Category: OpenShift

  • Bridging the Gap: Shared NFS Storage Between VMs and OpenShift

    When migrating workloads to OpenShift, one of the most common hurdles is data sharing. You might have a legacy VM writing logs or processing files and a modern containerized app that needs to read them—or vice versa.

    While OpenShift natively supports NFSv4, getting “identical visibility” across both environments requires a bit of finesse. Here is how to handle NFS mounting without compromising security or breaking the OpenShift security model.

    The “Don’t Do This” List

    Before we dive into the solution, it’s important to understand why the “obvious” paths often lead to trouble:

    • Avoid Custom SCCs for Direct Container Mounts: You could technically mount the NFS share directly inside the container. However, in OpenShift, Pods run under restricted Security Context Constraints (SCCs). Bypassing these with a custom SCC opens up attack vectors. It’s better to let the platform handle the mount.
    • Don’t Hack the CSI Driver: You might be tempted to force a dynamic provisioner to use a specific root path. This is a bad move. CSI drivers create unique subfolders for a reason: to prevent App A from accidentally (or maliciously) seeing App B’s data. Breaking this breaks your security isolation.

    The Solution: Static PersistentVolumes

    The most robust way to ensure a VM and a Pod see the exact same folder is by using a Static PersistentVolume (PV). This bypasses the dynamic provisioner’s tendency to create unique subfolders, allowing you to point OpenShift exactly where the VM is looking.

    1. Define the Static PersistentVolume

    You must manually define the PV. This allows you to hardcode the server and path to match the VM’s mount point.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-pv
    spec:
      capacity:
        storage: 100Gi # Required but not used with nfs
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain # Data survives PVC deletion
      nfs:
        path: /srv/nfs_dir  # The identical path used by your VM
        server: nfs-server.example.com

    Important Server-Side Config: To avoid “UID/GID havoc,” ensure your NFS server export is configured with rw,sync,no_root_squash,no_all_squash. This prevents the server from rewriting IDs, which is vital when OpenShift’s secure containers use random UIDs. See the Cloud Pak for Data article for more details.
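
    For reference, a matching export on the NFS server might look like the line below. This is a sketch; the client subnet is illustrative and should be replaced with the network your VMs and OpenShift nodes actually use.

    # /etc/exports on nfs-server.example.com (illustrative client subnet)
    /srv/nfs_dir 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)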

    2. Create the PersistentVolumeClaim (PVC)

    Next, create a PVC that binds specifically to the static PV you just created. By setting the storageClassName to an empty string, you tell OpenShift not to look for a dynamic provisioner.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-pvc
      namespace: your-app-namespace
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      volumeName: shared-pv # Direct binding to the static PV
      storageClassName: "" # Keep this empty to avoid dynamic provisioning

    3. Mount the PVC in your Pod

    Finally, reference the PVC in your Pod’s volume definition. This is where the magic happens: the container sees the filesystem exactly as the VM does.

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-data-app
    spec:
      containers:
      - name: app-container
        image: my-app-image
        volumeMounts:
        - name: nfs-storage
          mountPath: /var/data/shared
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: shared-pvc

    Mount Options

    NFS can be picky. If your server requires specific versions or security flavors, add a mountOptions section to your PV definition to match the VM’s parameters exactly (e.g., nfsvers=4.1 or sec=sys).
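
    As a sketch, the static PV from step 1 could carry those options like this (the values are examples; match them to the VM’s actual mount parameters):

    spec:
      mountOptions:
        - nfsvers=4.1
        - sec=sys
      nfs:
        path: /srv/nfs_dir
        server: nfs-server.example.com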

    Summary

    By using a Static PV, you treat the NFS share as a known, fixed resource rather than a disposable volume. This keeps your OpenShift environment secure, your SCCs restricted, and your data perfectly synced between your infrastructure layers.

  • Introducing OpenShift Installer Provisioned Infrastructure (IPI) for IBM PowerVC (TechPreview)

    With Red Hat OpenShift Container Platform (OCP) 4.21, there is a new Tech Preview of the powervc installer platform. The Tech Preview provides early access so you can test and give feedback on the simplified deployment of OCP on IBM Power Virtualization Center (PowerVC) managed infrastructure, offering a powerful combination of enterprise-grade reliability and container orchestration. For administrators using IBM PowerVC, the Installer Provisioned Infrastructure (IPI) method simplifies the deployment process by automating the provisioning of the underlying infrastructure resources.

    We welcome feedback and any thoughts.

    Thanks to the dev leaders and QE Team

    More details are at https://community.ibm.com/community/user/blogs/paul-bastide/2026/02/24/introducing-openshift-installer-provisioned-infras

  • Quick Fix: Resolving tmpfs Space Constraints on IBM Power (ppc64le)

    If you are deploying OpenShift on IBM Power and encountering failures during the node-image pull, you are likely hitting a known capacity limit in the temporary filesystem.

    The Problem

    In recent 4.x builds (specifically identified in OCPBUGS-70168), the default 4GiB allocation for /var/ostreecontainer in tmpfs is not enough for Power architectures. As the node-image attempts to pull and pivot to the latest operating system levels, it falls short by approximately 200MiB.

    The failure manifests in the logs as a “min-free-space” error:

    error: Writing content object: min-free-space-percent '3%' would be exceeded...


    The Solution: Manual Remount

    While PR #10304 is being reviewed and merged, you can resolve this manually by increasing the size of the mount.

    When to apply: This must be executed early in the boot process, specifically after the tmpfs is initialized but before the node-image-pull.sh script attempts to fetch the release layers.

    sudo mount -o remount,size=5G /var/ostreecontainer
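
    You can confirm the new size took effect before node-image-pull.sh fetches the release layers (same path as above):

    df -h /var/ostreecontainer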
    

    Key Considerations

    • Memory Usage: Increasing this limit “steals” an additional 1GB from main system memory. Ensure your LPAR or VM has enough overhead to accommodate this temporary spike.
    • Persistence: This is a volatile change to tmpfs. Once the node pivots and reboots into the new image, the temporary filesystem is cleared.
  • For Your Awareness: Selecting the Right OpenShift Cluster Type for Power Virtual Server IPI

    My colleague Christy has posted a new article: Selecting the Right OpenShift Cluster Type for Power Virtual Server IPI.

    The net: you don’t want to select a random type; be intentional. The key difference lies in access and internet connectivity.

    ✅ Public Cluster: Reachable by anyone, simple setup via CIS DNS.
    ✅ Private Cluster: Internal users only. Uses a Bastion/VPN. Outbound updates stay enabled.
    ✅ Disconnected: Air-gapped. High security, high complexity (requires image mirroring).

    Pro-Tip: Most enterprise production workloads land in the Private category—it offers the best balance of security without the operational headache of a fully air-gapped environment.

    Learn more at https://community.ibm.com/community/user/blogs/christy-norman/2026/02/16/selecting-the-right-openshift-cluster-install-meth

  • Mirroring for OpenShift on IBM Power in Disconnected Environments

    Many IBM customers run IBM Power workloads in secure environments, where the convenience of a direct internet connection for the workload is restricted or strictly prohibited.

    For OpenShift Container Platform clusters in these disconnected environments, the cluster is configured to retrieve release images from a local registry and to retrieve update paths and recommendations from a secure location. To run without a direct internet connection, you configure your cluster to access that secure location inside the disconnected environment.

    There are two levels of disconnected environments: Disconnected (Restricted Network) and Air-Gapped (Fully Isolated). Air-Gapped further restricts access, with no physical or logical connection to any outside network, whereas Restricted has limited network access and passes connections through a jumpbox.

    The OpenShift team has updated its support for disconnected environments with the release of oc-mirror version 2. Customers are encouraged to change from the prior standard, oc adm release mirror, to the new oc-mirror --v2 to get the best disconnected experience (signatures, attestations, performance).

    • oc adm release mirror (v1): Manual, script-heavy, and struggles with Operator Lifecycle Manager (OLM). You often ended up mirroring thousands of images you didn’t need just to get one Operator working.
    • oc-mirror (v2): A declarative, plugin-based approach. You define an ImageSetConfiguration file (YAML), and the tool calculates exactly which versions and dependencies are needed.

    This article shows how to mirror a ppc64le payload (release).

    1. Download the latest oc-mirror binary from the OpenShift mirror site and ensure it’s in your $PATH. You can download from https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.21.2/oc-mirror.tar.gz

    Note: As of OpenShift 4.18+, v1 is officially deprecated.
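
    A minimal install sequence for the downloaded archive might look like the following. This is a sketch; it assumes a Linux bastion and write access to /usr/local/bin.

    curl -LO https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.21.2/oc-mirror.tar.gz
    tar -xzf oc-mirror.tar.gz
    chmod +x oc-mirror
    sudo mv oc-mirror /usr/local/bin/
    oc-mirror --help   # confirm the binary is on your PATH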

    2. Define your ImageSet and call it imageset-config.yaml. You can mirror the multi payload by switching from ppc64le to multi.
    apiVersion: mirror.openshift.io/v2alpha1
    kind: ImageSetConfiguration
    mirror:
      platform:
        architectures:
          - "ppc64le"
        channels:
          - name: stable-4.21
            minVersion: 4.21.0
            maxVersion: 4.21.1
    
    3. Run the Mirroring

    a. For Disconnected (Mirror-to-Mirror):

    oc mirror --config imageset-config.yaml docker://local-registry.internal:5000 --v2
    

    b. For Air-Gapped (Mirror-to-Disk):

    1. On the Internet-connected machine:
    oc mirror --config imageset-config.yaml file://my-mirror-bundle --v2
    
    2. Move the my-mirror-bundle folder to your isolated environment.
    3. On the Air-gapped machine:
    oc mirror --from file://my-mirror-bundle docker://local-registry.internal:5000 --v2
    

    Summary

    To support IBM Power workloads in high-security environments, oc-mirror v2 replaces manual, script-heavy mirroring with a declarative, YAML-based approach that automatically calculates required dependencies for Disconnected or Air-Gapped clusters. This transition to a plugin-based architecture (v2) ensures better performance, security, and reproducibility for maintaining ppc64le payloads without a direct internet connection.

    We encourage you to switch to using oc-mirror --v2 as soon as possible.

  • 🚀 Multi-Arch Tuning Operator v1.2.2 is now live!

    Red Hat released 1.2.2 of the Multi-Arch Tuning Operator. This update focuses on enhancing stability, security, and developer experience for managing workloads across diverse CPU architectures in your OpenShift cluster’s compute plane.

    ✨ What’s New in v1.2.2?

    • UBI 9 Minimal Base Image: The operator has been migrated to use the Red Hat Universal Base Image (UBI) 9 Minimal. This change provides a smaller, more secure footprint and better compatibility with modern OpenShift ecosystems.
    • API Cleanup: We’ve removed unused fields from the CustomResource to streamline configuration and reduce clutter.
    • Upgraded Dependencies: To ensure the highest performance and security standards, the operator now uses:
    • Operator SDK v1.33
    • Go 1.25.3
    • Controller-Runtime 1.34.1
    • Security Patches: Includes critical updates to underlying libraries (e.g., sigstore/fulcio to fix CVE-2025-66564).

    If you’re running a cluster with mixed architectures (like x86_64 and ppc64le), this operator is the “easy button” for ensuring your pods land on the right nodes without manual headaches.
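
    Getting started is typically a single custom resource. The sketch below assumes the operator’s ClusterPodPlacementConfig API (multiarch.openshift.io/v1beta1) and the required resource name cluster; check the operator documentation for the exact API version and optional spec fields in your release.

    cat <<EOF | oc apply -f -
    apiVersion: multiarch.openshift.io/v1beta1
    kind: ClusterPodPlacementConfig
    metadata:
      name: cluster
    EOF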

  • OpenShift 4.21.0: Using rhel-coreos-10 nodes

    With OpenShift 4.21.0, you will see two new entries in the release payload for rhel-coreos-10 and rhel-coreos-10-extensions. These are node images for RHEL 10. This capability is a DevPreview/Tech Preview, and will be available as a stable/generally available feature in a future release.

    An example of the release payload entry:

    rhel-coreos-10                                 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca2fc3c3f56da0772fe6d428b78c4ce4ed36afaf9cc5a71f6c310ffb1673772a
    rhel-coreos-10-extensions                      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:077b897ed0cd007ed9b88762d6834fcfc0b9508538ddc9df1dbc0917231e45fc
    

    Note: there is a limitation that RHEL 10 nodes must be used with Power10 or Power11 based systems.

    You can create a new MachineConfigPool with the name rhel10 using Multi-Architecture Compute: Supporting Architecture Specific Operating System and Kernel Parameters. The matchExpression applies the worker and rhel10 MachineConfig objects to the MachineConfigPool members. The members are determined based on the matchLabels.

    cat <<EOF | oc apply -f -
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: rhel10
    spec:
      machineConfigSelector:
        matchExpressions:
          - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,rhel10]}
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/rhel10: ""
    EOF
    

    Verify the rhel10 MachineConfigPool is created.
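
    For example, using the pool name defined above:

    oc get machineconfigpool rhel10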

    Find the rhel-coreos-10 image

    oc adm release info --image-for rhel-coreos-10
    

    Now that the rhel10 MachineConfigPool is created, apply a MachineConfig that points the pool at the rhel-coreos-10 image:

    cat <<EOF | oc apply -f -
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: rhel10
      name: 99-override-image-rhel10
    spec:
      osImageURL: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca2fc3c3f56da0772fe6d428b78c4ce4ed36afaf9cc5a71f6c310ffb1673772a"
    EOF
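
    To move a worker node into the new pool, label it so the nodeSelector above matches. This is a sketch; substitute your actual node name:

    oc label node <worker-node-name> node-role.kubernetes.io/rhel10=""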
    

    You have seen how to use RHEL10 nodes with OpenShift on IBM Power.

  • OpenShift 4.21.0 ClusterImagePolicy Feature enforces signature verification

    With OpenShift Container Platform 4.21, Red Hat further improved supply chain security with signature verification for the release image. The verification is controlled by the ClusterImagePolicy which specifies the Root CA and a scope of quay.io/openshift-release-dev/ocp-release. When an image is used on a node, the signatures are pulled from the mirror along with the image, and verified before starting a container.

    If you are running a disconnected cluster and the signature is missing from your mirror, you’ll see the following:

    • Install: The bootstrap node hangs and the installation does not complete because the signature cannot be verified.
    • Upgrade: The ClusterImagePolicy enforces signature verification. If the signatures are missing from your mirror, the Cluster Version Operator (CVO) will be blocked, preventing node updates.

    In order to continue, you may do one of the following:

    1. Use oc mirror --v2 to mirror your content. This feature automatically honors signatures … see Mirroring images for a disconnected installation using the oc-mirror plugin
    2. If you are currently using oc adm release mirror, you can copy the sig file for the release payload:
    $ oc image mirror quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig registry.example.com/openshift/whatever:${RELEASE_DIGEST}.sig
    

    RELEASE_DIGEST: Specifies your release image digest with the : character replaced by a - character. For example: sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee becomes sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee.sig.
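
    For example, the substitution can be done in the shell before running the mirror command above (a sketch; the digest is the example value from this section):

    DIGEST="sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee"
    RELEASE_DIGEST="${DIGEST/:/-}"
    echo "${RELEASE_DIGEST}.sig"   # prints sha256-884e...cebdee.sig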

    It is recommended that you switch to using oc-mirror --v2.

    Good luck with your disconnected clusters, and ensure image signatures are present in your local mirror using one of the mirroring methods.

    References

    1. Red Hat OpenShift Docs: Chapter 12. Manage secure signatures with sigstore
    2. Red Hat Developer: How to verify container signatures in disconnected OpenShift
    3. Red Hat Developer: Verify Cosign bring-your-own PKI signature on OpenShift
    4. Red Hat Developer: How oc-mirror version 2 enables disconnected installations in OpenShift 4.16
  • cert-manager for Red Hat OpenShift: using self-signed certs in your cluster

    In Red Hat OpenShift, cert-manager is a specialized operator that automates the management of X.509 (SSL/TLS) certificates. It acts as a “Certificates-as-a-Service” tool within your cluster, ensuring that applications have valid, up-to-date certificates without requiring manual intervention from administrators.

    This blog shows how to use self-signed certs in your cluster.

    Here is a recipe to use trusted certificates in your cluster:

    To install the Red Hat cert-manager Operator on OpenShift 4.20 using only the CLI and the official Red Hat catalog, follow these steps.

    This process involves creating a namespace, an OperatorGroup, and a Subscription to the Red Hat operator catalog.

    1. Create the Namespace

    First, create a dedicated namespace for the cert-manager operator. This example uses openshift-cert-manager-operator.

    oc create namespace openshift-cert-manager-operator
    
    2. Create the OperatorGroup

    An OperatorGroup defines the multi-tenant configuration for the operator. In most cases for cert-manager, we configure it to watch all namespaces (cluster-wide).

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-cert-manager-operator
      namespace: openshift-cert-manager-operator
    spec:
      upgradeStrategy: Default
    EOF
    
    3. Create the Subscription

    The Subscription tells the Operator Lifecycle Manager (OLM) which operator to install, which catalog to use, and which channel to track. For OpenShift 4.20, we use the stable-v1 channel from the redhat-operators catalog.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-cert-manager-operator
      namespace: openshift-cert-manager-operator
    spec:
      channel: stable-v1
      installPlanApproval: Automatic
      name: openshift-cert-manager-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
    
    4. Verify the Installation

    It takes a few moments for OLM to process the subscription and deploy the pods. Run these commands to track the progress:

    • Check the Cluster Service Version (CSV) Status
    [root@rct-ocp-pra-fbac-bastion-0 ~]# oc get csv -n openshift-cert-manager-operator -w
    NAME                            DISPLAY                                       VERSION   REPLACES                        PHASE
    cert-manager-operator.v1.18.0   cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0   Installing
    cert-manager-operator.v1.18.0   cert-manager Operator for Red Hat OpenShift   1.18.0    cert-manager-operator.v1.17.0   Succeeded
    

    Wait until the Phase changes to Succeeded.

    • Check the Operator Pods
    oc get pods -n openshift-cert-manager-operator
    
    • Check the cert-manager Component Pods: Once the operator is running, it will automatically deploy the actual cert-manager components (controller, webhook, and CA injector) into a new namespace called cert-manager.
    [root@rct-ocp-pra-fbac-bastion-0 ~]# oc get pods -n cert-manager
    NAME                                      READY   STATUS    RESTARTS   AGE
    cert-manager-cainjector-758dbfb96-2qsc4   1/1     Running   0          57s
    cert-manager-cc4c8748-6xztc               1/1     Running   0          48s
    cert-manager-webhook-7949d5896-dqrfn      1/1     Running   0          57s
    

    To set up a self-signed issuer, you need to define a ClusterIssuer (available across the whole cluster) or an Issuer (restricted to a single namespace).

    In the context of cert-manager, a Self-Signed issuer is the simplest type; it doesn’t require an external CA or DNS validation. It is often used to bootstrap a Root CA for internal services.

    1. Create a Cluster-Wide Self-Signed Issuer

    Use this if you want any namespace in your OpenShift cluster to be able to request self-signed certificates.

    cat <<EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: selfsigned-cluster-issuer
    spec:
      selfSigned: {}
    EOF
    
    2. Verify the Issuer Status

    Once applied, check that the issuer is “Ready.”

    [root@rct-ocp-pra-fbac-bastion-0 ~]# oc get clusterissuer selfsigned-cluster-issuer
    NAME                        READY   AGE
    selfsigned-cluster-issuer   True    38s
    
    3. Create a Root CA Certificate

    This certificate will be signed by the ClusterIssuer and act as your internal CA.

    oc apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: internal-root-ca
      namespace: cert-manager
    spec:
      isCA: true
      commonName: internal-root-ca
      secretName: internal-root-ca-secret
      issuerRef:
        name: selfsigned-cluster-issuer
        kind: ClusterIssuer
    EOF
    
    4. Create the CA ClusterIssuer. Now, create an issuer that uses that secret to sign other certificates (like your webhook’s).
    oc apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: console-ca-issuer
    spec:
      ca:
        secretName: internal-root-ca-secret
    EOF
    
    5. Create the service Certificate. The dnsNames must match the Service name of your webhook.
    oc apply -f - <<EOF
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: console-demo-plugin-cert
      namespace: console-demo-plugin
    spec:
      dnsNames:
        - console-demo-plugin.console-demo-plugin.svc
        - console-demo-plugin.console-demo-plugin.svc.cluster.local
      secretName: console-serving-cert
      issuerRef:
        name: console-ca-issuer
        kind: ClusterIssuer
    EOF
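
    Before moving on, you can confirm the Certificate was issued and its secret exists (using the names from the previous step):

    oc get certificate console-demo-plugin-cert -n console-demo-plugin
    oc get secret console-serving-cert -n console-demo-plugin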
    
    6. Extract the CA bundle from the secret to a local file
    oc get secret internal-root-ca-secret -n cert-manager \
      -o jsonpath='{.data.ca\.crt}' | base64 -d > internal-ca-bundle.crt
    
    7. Add to the cluster trust store
    oc create configmap custom-ca \
      --from-file=ca-bundle.crt=internal-ca-bundle.crt \
      -n openshift-config
    
    8. Add the certificate authority to the cluster-wide trusted CA bundle
    oc patch proxy/cluster --type=merge \
      --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'
    
    9. Delete the console pods and wait until they are restarted
    [root@rct-ocp-pra-fbac-bastion-0 ~]# oc delete pods -n openshift-console --all
    ...
    [root@rct-ocp-pra-fbac-bastion-0 ~]# oc get pods -n openshift-console
    NAME                         READY   STATUS    RESTARTS   AGE
    console-7554bb587c-hkqd4     1/1     Running   0          5m24s
    console-7554bb587c-jlxtd     1/1     Running   0          5m24s
    downloads-678b99d49c-8n64s   1/1     Running   0          5m24s
    downloads-678b99d49c-fgmgp   1/1     Running   0          5m24s
    

    You can now use the certificates to secure communication in your cluster.

  • Beyond the Static Dashboard: The Power of the Dynamic ConsolePlugin in OpenShift

    In the fast-evolving world of cloud-native platforms, a “one size fits all” user interface is no longer enough. As your ecosystem grows, your console needs to grow with it—without requiring a full platform reboot every time you want to add a new feature.

    Enter Dynamic Plugins. By shifting away from hardcoded UI components toward a flexible, runtime-loaded architecture, developers can now inject custom pages and extensions directly into the console on the fly. Leveraging the power of the Operator Lifecycle Manager (OLM), these plugins are delivered as self-contained micro-services that integrate seamlessly into your existing workflow. In this post, we’ll explore how this architecture turns your cluster console into a living, extensible platform.

    Here is the recipe to test the ConsolePlugin.

    For this test setup, you’ll need to build the plugin container image and push it to the cluster’s internal registry.

    1. Set up the external route for the Image Registry
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    config.imageregistry.operator.openshift.io/cluster patched
    
    2. Check the OpenShift image registry host; you should see the hostname printed.
    $ oc get route default-route -n openshift-image-registry --template='{{.spec.host }}'
    default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
    
    3. Make the local registry lookup use relative names
    $ oc set image-lookup  --all
    
    4. Set a temporary login
    export KUBECONFIG=~/local_config
    oc login -u kubeadmin -p $(cat openstack-upi/auth/kubeadmin-password) api.rct-ocp-pra-fbac.ibm.com:6443
    
    5. Log in to the registry (you must use the token from oc whoami -t)
    $ podman login --tls-verify=false -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
    Login Succeeded
    
    6. Revert to the default kubeconfig
    $ unset KUBECONFIG
    
    7. Create the test plugin project
    oc new-project console-demo-plugin
    oc label namespace/console-demo-plugin security.openshift.io/scc.podSecurityLabelSync=false --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce-version=v1.24 --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/warn=privileged --overwrite=true
    
    8. Clone the test repo
    git clone https://github.com/openshift/console-plugin-template
    cd console-plugin-template/
    
    9. Build the Container Image
    $ oc project console-demo-plugin
    $ podman build -t $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin -f Dockerfile .
    

    ⚠️ If the build stalls, add the IP of the primary interface to /etc/resolv.conf as a nameserver. For instance, nameserver 10.20.184.190 is added as a new line.

    10. Push the Container Image
    $ podman push --tls-verify=false $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin
    
    11. Helm install the console-plugin-template
    $ helm upgrade -i console-plugin-template charts/openshift-console-plugin \
        -n console-demo-plugin \
        --set plugin.image=$(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin \
        --set plugin.jobs.patchConsoles.image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16816f988db21482c309e77207364b7c282a0fef96e6d7da129928aa477dcfa7
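
    If the chart’s patch job does not enable the plugin for you, a cluster administrator can enable it through the console operator configuration. This is a sketch: the merge patch replaces the current spec.plugins list, so merge in any existing entries, and use the ConsolePlugin name reported by the first command.

    # Find the ConsolePlugin name created by the chart
    oc get consoleplugin
    # Enable the plugin in the console operator configuration
    oc patch consoles.operator.openshift.io cluster --type=merge \
      --patch '{"spec":{"plugins":["<plugin-name>"]}}'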
    

    In Conclusion: Seamless Extensibility Through Automation

    Dynamic plugins represent a major leap forward in UI flexibility. By utilizing OLM Operators to manage the underlying infrastructure, the process of extending a console is both automated and scalable. To recap the workflow:

    • Deployment: An Operator spins up a dedicated HTTP server and Kubernetes service to host the plugin’s assets.
    • Registration: The ConsolePlugin custom resource acts as the bridge, announcing the plugin’s presence to the system.
    • Activation: The cluster administrator retains ultimate control, enabling the plugin through the Console Operator configuration.

    This decoupled approach ensures that your console remains lightweight and stable while providing the “pluggable” freedom necessary for modern, customized cloud environments.

    Reference

    Dynamic Plugins in 4.20