In the fast-evolving world of cloud-native platforms, a “one size fits all” user interface is no longer enough. As your ecosystem grows, your console needs to grow with it—without requiring a full platform reboot every time you want to add a new feature.
Enter Dynamic Plugins. By shifting away from hardcoded UI components toward a flexible, runtime-loaded architecture, developers can now inject custom pages and extensions directly into the console on the fly. Leveraging the power of the Operator Lifecycle Manager (OLM), these plugins are delivered as self-contained micro-services that integrate seamlessly into your existing workflow. In this post, we’ll explore how this architecture turns your cluster console into a living, extensible platform.
Here is the recipe to test the ConsolePlugin.
With the test setup in place, you'll need to rebuild the container image after each change.
- Setup the external route for the Image Registry
$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
config.imageregistry.operator.openshift.io/cluster patched
- Check the OpenShift image registry host; you should see the hostname printed.
$ oc get route default-route -n openshift-image-registry --template='{{.spec.host }}'
default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
- Make the local registry lookup use relative names
$ oc set image-lookup --all
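Under the hood, `oc set image-lookup` flips the lookup policy on each ImageStream in the current project, so pods can reference images by their short, relative names. The resulting spec looks roughly like this (a sketch; the stream name here is an assumption):

```yaml
# Sketch of the ImageStream change made by `oc set image-lookup`:
# local lookup lets pods resolve the stream by its short name.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: console-demo-plugin
spec:
  lookupPolicy:
    local: true
```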
- Set a temporary login
export KUBECONFIG=~/local_config
oc login -u kubeadmin -p $(cat openstack-upi/auth/kubeadmin-password) api.rct-ocp-pra-fbac.ibm.com:6443
- Log in to the registry (you must use the token from oc whoami -t as the password)
$ podman login --tls-verify=false -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
Login Succeeded
- Revert to the default kubeconfig
$ unset KUBECONFIG
- Create the test plugin
oc new-project console-demo-plugin
oc label namespace/console-demo-plugin security.openshift.io/scc.podSecurityLabelSync=false --overwrite=true
oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce=privileged --overwrite=true
oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce-version=v1.24 --overwrite=true
oc label namespace/console-demo-plugin pod-security.kubernetes.io/audit=privileged --overwrite=true
oc label namespace/console-demo-plugin pod-security.kubernetes.io/warn=privileged --overwrite=true
- Clone the test repo
git clone https://github.com/openshift/console-plugin-template
cd console-plugin-template/
- Build Container Image
$ oc project console-demo-plugin
$ podman build -t $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin -f Dockerfile .
:warning: If the build stalls, add the IP address of the primary interface to /etc/resolv.conf as a nameserver. For instance, add nameserver 10.20.184.190 on a new line.
- Push Container Image
$ podman push --tls-verify=false $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin
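The same image reference is typed out in the build, push, and Helm steps; assembling it once in a variable avoids drift between the three commands. A minimal sketch, with the registry host and project name hardcoded from the example values above (in practice you would capture them from oc as shown earlier):

```shell
# Assemble the image reference once and reuse it in build/push/helm.
# REGISTRY and PROJECT are taken from the example output above.
REGISTRY="default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com"
PROJECT="console-demo-plugin"
IMAGE="${REGISTRY}/${PROJECT}/console-demo-plugin:plugin"
echo "${IMAGE}"
```

The variable can then be passed to podman build -t, podman push, and --set plugin.image= unchanged.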
- Helm install the console-plugin-template
$ helm upgrade -i console-plugin-template charts/openshift-console-plugin \
-n console-demo-plugin \
--set plugin.image=$(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin \
--set plugin.jobs.patchConsoles.image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16816f988db21482c309e77207364b7c282a0fef96e6d7da129928aa477dcfa7
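The chart's patchConsoles job enables the plugin by adding it to the Console Operator configuration. Once it has run, the operator config should contain an entry like the following (a sketch; the plugin name is assumed to match the Helm release above):

```yaml
# Console Operator config (operator.openshift.io/v1, name: cluster)
# after the plugin has been enabled. The plugin name is an assumption
# matching the Helm release name used above.
spec:
  plugins:
    - console-plugin-template
```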
In Conclusion: Seamless Extensibility Through Automation
Dynamic plugins represent a major leap forward in UI flexibility. By utilizing OLM Operators to manage the underlying infrastructure, the process of extending a console is both automated and scalable. To recap the workflow:
- Deployment: An Operator spins up a dedicated HTTP server and Kubernetes service to host the plugin’s assets.
- Registration: The ConsolePlugin custom resource acts as the bridge, announcing the plugin’s presence to the system.
- Activation: The cluster administrator retains ultimate control, enabling the plugin through the Console Operator configuration.
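The registration step boils down to one small custom resource. A minimal ConsolePlugin sketch, where the service name, namespace, and port are assumptions matching the template chart's defaults rather than values from this post:

```yaml
# Minimal ConsolePlugin sketch — service name/namespace/port are
# assumed to match the Helm chart defaults.
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: console-plugin-template
spec:
  displayName: "Console Demo Plugin"
  backend:
    type: Service
    service:
      name: console-plugin-template
      namespace: console-demo-plugin
      port: 9443
      basePath: /
```

The console pod fetches the plugin's assets from this service over HTTPS, which is why the Helm chart also provisions a serving certificate for it.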
This decoupled approach ensures that your console remains lightweight and stable while providing the “pluggable” freedom necessary for modern, customized cloud environments.