Author: Paul Bastide

  • 🎉 A New Deployable Architecture variation Quickstart OpenShift for Power Virtual Server

     IBM Cloud introduces a new Deployable Architecture variation, Quickstart OpenShift, for the *Power Virtual Server with VPC landing zone* deployable architecture. This quickstart accelerates the deployment of an OpenShift cluster fully configured with IBM Cloud services.

    For more information, see OpenShift for the PowerVS with VPC Landing Zone Deployable Architecture and Power Virtual Server with VPC landing zone.

  • 🎉 Multiarch Tuning Operator v1.2.0 is released

    The Multiarch Tuning Operator v1.2.0 is released. Version 1.2.0 continues to enhance the experience for administrators of OpenShift clusters with multi-architecture compute.

    If you’ve ever run an Intel container on a Power node, v1.2.0 alerts you using an eBPF program that monitors for ENOEXEC (the “Exec Format Error”). It’s super helpful when you are migrating to IBM Power.
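    For context, ENOEXEC is the error the kernel returns when an executable's format doesn't match the node's architecture. You can reproduce what the error looks like locally, no cluster needed; this sketch just fakes an invalid binary:

```shell
# Create a file that is executable but not a valid binary for this machine.
# The leading bytes mimic an ELF header; the NUL byte makes bash treat it
# as binary instead of falling back to running it as a script.
workdir=$(mktemp -d)
printf '\x7fELF\x00not-a-valid-binary' > "$workdir/wrongarch"
chmod +x "$workdir/wrongarch"

# execve() fails with ENOEXEC; bash reports "cannot execute binary file:
# Exec format error" and exit code 126.
"$workdir/wrongarch" || echo "exit code: $?"
rm -rf "$workdir"
```

    This is the same failure the operator's eBPF monitor surfaces when an x86_64 image lands on a ppc64le node.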

    You can install the Multiarch Tuning Operator right from OperatorHub on any OpenShift cluster running 4.16 or higher.

    To enable the monitoring, configure your global ClusterPodPlacementConfig:

    oc create -f - <<EOF
    apiVersion: multiarch.openshift.io/v1beta1
    kind: ClusterPodPlacementConfig
    metadata:
      name: cluster
    spec:
      logVerbosity: Normal
      namespaceSelector:
        matchExpressions:
          - key: multiarch.openshift.io/exclude-pod-placement
            operator: DoesNotExist
      plugins:
        execFormatErrorMonitor:
          enabled: true
    EOF
    

    References

    1. docs
    2. container
    3. Enhancement: MTO-0004-enoexec-monitoring.md
  • 🎉 Red Hat Build of Kueue v1.1 Now Available on IBM Power

    We’re excited to let you know that Red Hat Build of Kueue v1.1 is now available on IBM Power systems! This marks an important step in enabling AI and HPC workloads on IBM Power.

    A little background: Kueue is a Kubernetes-native job queueing system designed to manage workloads efficiently in shared clusters. It provides a set of APIs and controllers that act as a job-level manager, making intelligent decisions about:

    • When a job should start – allowing pods to be created when resources are available.
    • When a job should stop – ensuring active pods are deleted when the job completes or resources need to be reallocated.

    This approach helps organizations optimize resource utilization on IBM Power OpenShift clusters.
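    As a sketch of the moving parts (names and quota values here are illustrative, not from the release), a minimal Kueue setup defines a ResourceFlavor, a ClusterQueue with quota, a LocalQueue, and a Job labeled for that queue:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: power-nodes
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}          # admit workloads from any namespace
  resourceGroups:
    - coveredResources: ["cpu", "memory"]
      flavors:
        - name: power-nodes
          resources:
            - name: cpu
              nominalQuota: 8    # illustrative quota
            - name: memory
              nominalQuota: 32Gi
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: team-queue
  namespace: default
spec:
  clusterQueue: cluster-queue
---
# A Job submitted through Kueue: created suspended, started when quota frees up
apiVersion: batch/v1
kind: Job
metadata:
  generateName: sample-job-
  namespace: default
  labels:
    kueue.x-k8s.io/queue-name: team-queue
spec:
  suspend: true
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.access.redhat.com/ubi9/ubi-minimal
          command: ["sleep", "30"]
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
```

    Kueue clears the Job's suspend flag only when the ClusterQueue has unused quota, and reclaims the resources when the Job completes.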

    Documentation is available at https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/ai_workloads/red-hat-build-of-kueue

  • Notes from Testing SMB/CIFS CSI driver with OpenShift

    1. Log in to the OpenShift cluster with oc login. You’ll need to do this with a password, not a kubeconfig.
    2. Clone the repository: git clone https://github.com/prb112/openshift-samba
    3. Change into the directory: cd openshift-samba
    4. Create the project: oc new-project samba-test
    5. Update Project permissions
    oc label namespace/samba-test security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/enforce=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/audit=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/warn=privileged --overwrite
    
    6. Expose the image registry’s default route:
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    
    7. Run ./enable-registry-and-push.sh
    $ ./enable-registry-and-push.sh
    === Image successfully pushed to OpenShift registry ===
    You can now use this image in your deployments with: default-route-openshift-image-registry.apps.kt-test-cp4ba-1174.powervs-openshift-ipi.cis.ibm.net/samba-test/samba:latest
    
    8. Create the secret with oc create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD"
    9. Set up the SMB server:
    cat << EOF | oc apply -f -
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: smb-server
      namespace: samba-test
      labels:
        app: smb-server
    spec:
      type: ClusterIP
      selector:
        app: smb-server
      ports:
        - port: 445
          name: smb-server
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: smb-client-provisioner
      namespace: samba-test
    ---
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: smb-server
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: smb-server
      template:
        metadata:
          name: smb-server
          labels:
            app: smb-server
        spec:
          serviceAccountName: smb-client-provisioner
          nodeSelector:
            kubernetes.io/os: linux
            kubernetes.io/hostname: worker-0
          containers:
            - name: smb-server
              image: image-registry.openshift-image-registry.svc:5000/samba-test/samba:latest
              ports:
                - containerPort: 445
              securityContext:
                privileged: true
                capabilities:
                  add:
                    - SYS_ADMIN
                    - FOWNER
                    - NET_ADMIN
                  drop:
                    - ALL
                runAsUser: 0
                runAsNonRoot: false
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: true
              volumeMounts:
                - mountPath: /export/smbshare
                  name: data-volume
          volumes:
            - name: data-volume
              hostPath:
                path: /var/smb
                type: DirectoryOrCreate
    EOF
    
    10. Grant the required SCCs with oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:samba-test:smb-client-provisioner and oc adm policy add-scc-to-user privileged -z smb-client-provisioner -n samba-test
    11. Delete the existing replica sets so the pods restart: oc delete rs --all -n samba-test
    12. Reset the samba-test permissions
    oc rsh smb-server-0
    chmod -R 777 /export
    
    13. Check that connectivity works:
    # oc rsh smb-server-0
    $ smbclient //smb-server.samba-test.svc.cluster.local/data -U USERNAME --password=PASSWORD -W WORKGROUP
    $ mkdir /export/abcd
    
    14. Then create the SMB StorageClass, PersistentVolumeClaim, and test Deployment:
    cat <<EOF | oc apply -f -
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: smb
    provisioner: smb.csi.k8s.io
    parameters:
      source: //smb-server.samba-test.svc.cluster.local/data
      csi.storage.k8s.io/provisioner-secret-name: smbcreds
      csi.storage.k8s.io/provisioner-secret-namespace: samba-test
      csi.storage.k8s.io/node-stage-secret-name: smbcreds
      csi.storage.k8s.io/node-stage-secret-namespace: samba-test
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    mountOptions:
      - dir_mode=0777
      - file_mode=0777
      - uid=1001
      - gid=1001
      - noperm
      - mfsymlinks
      - cache=strict
      - noserverino  # required to prevent data corruption
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-smb-1005
      namespace: samba-test
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: smb
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-smb
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-smb
      template:
        metadata:
          labels:
            app: nginx-smb
        spec:
          containers:
            - image: registry.access.redhat.com/ubi10/nginx-126@sha256:8e282961aa38ee1070b69209af21e4905c2ca27719502e7eaa55925c016ecb76
              name: nginx-smb
              command:
                - "/bin/sh"
                - "-c"
                - while true; do echo $(date) >> /mnt/outfile; sleep 1; done
              volumeMounts:
                - name: smb01
                  mountPath: "/mnt"
                  readOnly: false
          volumes:
            - name: smb01
              persistentVolumeClaim:
                claimName: pvc-smb-1005
    EOF
    
    15. Find the test pod: oc get pod -l app=nginx-smb
    16. Connect to the test pod and load a test file
    # oc rsh pod/nginx-smb-6b55dc568-mbk9t
    $ dd if=/dev/random of=/mnt/testfile bs=1M count=10
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    
    17. Rollout restart the deployment
    # oc rollout restart deployment nginx-smb
    
    18. Find the new test pod: oc get pod -l app=nginx-smb
    19. Connect to the new test pod and verify the test file
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    The sha256sum should agree with the first one.

    20. Restart the SMB Server
    oc rollout restart statefulset smb-server
    
    21. Connect to the test pod and verify the test file again
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    These should all agree.

    That’s all for testing. (I tried it out on our system.)

  • 🔥 Boost Your VS Code Workflow with a Custom Hotkey

    Sometimes, the smallest automation can make a big difference in your coding flow. If you frequently type . // > (maybe as part of a comment convention, markdown formatting, or a custom syntax), you can streamline your workflow by creating a hotkey in Visual Studio Code to insert it instantly.

    Here’s how to do it without installing any extensions.


    ✅ Step 1: Create a Keybinding to Trigger the Snippet

    While VS Code doesn’t allow direct keybinding to a named snippet, you can work around this by using the built-in editor.action.insertSnippet command with an inline snippet.

    🔧 How to Set It Up:

    1. Open the Command Palette: Ctrl+Shift+P (or Cmd+Shift+P on macOS)
    2. Type and select: Preferences: Open Keyboard Shortcuts (JSON)
    3. Add the following entry to your keybindings.json file:

    JSON

    [
      {
        "key": "ctrl+shift+q",
        "command": "editor.action.insertSnippet",
        "args": {
          "snippet": ". // >"
        },
        "when": "editorTextFocus"
      }
    ]

    💡 You can change "ctrl+shift+q" to any key combination that suits your workflow.


    ✅ Step 2: Test It

    Now, whenever you’re focused in a text editor in VS Code and press Ctrl+Shift+Q, it will instantly insert:

    . // >
    

    No extensions. No fuss. Just a clean, efficient shortcut.


    🧠 Bonus Tip

    Want to scope this to specific file types like Markdown or Python? You can add a condition to the "when" clause, such as:

    JSON

    "when": "editorTextFocus && editorLangId == 'markdown'"

  • Aside: Developing Applications Using Python Packages on IBM Power

    Janani Janakiraman posted Developing Applications Using Python Packages on IBM Power

    Are you an independent software vendor (ISV) or a customer looking to develop Python applications on the IBM Power platform? Then this blog is for you! It walks you through examples of using IBM’s Open Source Edge (OSE) and optimized, prebuilt Python wheels to accelerate development on IBM Power.


    IBM Power-optimized Python wheels are available via a DevPi repository, offering performance and compatibility benefits for AI/ML workloads. For best results, use Python versions 3.10–3.12 and set up a virtual environment with --prefer-binary and --extra-index-url to install packages from the IBM wheel repository.
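    A minimal sketch of that setup follows; the actual DevPi index URL comes from the blog post, so it appears below only as a commented placeholder:

```shell
# Create an isolated environment (Python 3.10-3.12 are the recommended levels)
python3 -m venv power-venv
. power-venv/bin/activate

# Verify the two pip options this workflow relies on are available
pip install --help | grep -E -o -- '--prefer-binary|--extra-index-url' | sort -u

# Then install, preferring prebuilt wheels from the IBM Power index.
# NOTE: the index URL below is a placeholder -- use the DevPi URL from the post.
#   pip install --prefer-binary --extra-index-url https://<devpi-host>/simple/ numpy
#
# Pin what you installed for reproducibility:
#   pip freeze > requirements.txt
```

    The --prefer-binary flag keeps pip from falling back to source builds (which need native libraries like libopenblas.so), and --extra-index-url consults the Power wheel repository alongside PyPI.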


    The OSE tool helps evaluate package availability and encourages community contributions to build scripts. Practical workflows are available in the pyeco GitHub repository, and troubleshooting tips for native libraries like libopenblas.so and libgfortran.so are included. Pinning package versions in requirements.txt ensures reproducibility and stability across environments.


    Community feedback is welcome—suggest packages, report issues, or contribute via GitHub to help grow the ecosystem!

    Please use these great resources.

  • Securing OpenShift UPI: Hardening DNS, HTTP, NFS, and SSL

    OpenShift UPI (User-Provisioned Infrastructure) offers flexibility and control, but with that comes the responsibility of securing the underlying services. In this post, we’ll walk through practical steps to lock down common services—DNS, HTTP, NFS, and SSL—to mitigate known vulnerabilities and improve your cluster’s security posture.


    🔐 DNS Server Hardening

    DNS is often overlooked, but it can be a rich source of information leakage and attack vectors. Here are four common DNS-related vulnerabilities and how to mitigate them:

    1. Cache Snooping – Remote Information Disclosure

    Attackers can infer what domains have been queried by your server.

    2. Recursive Query – Cache Poisoning Weakness

    Unrestricted recursion can allow attackers to poison your DNS cache.

    3. Spoofed Request – Amplification DDoS

    Open DNS resolvers can be abused for DDoS amplification attacks.

    4. Zone Transfer – Information Disclosure (AXFR)

    Misconfigured zone transfers can leak internal DNS data.

    ✅ Mitigation Script

    Use the following script to lock down named (BIND) and restrict access to trusted nodes only:

    # Backup
    cp /etc/named.conf /etc/named.conf-$(date +%s)
    
    # Remove bad includes
    if [[ $(grep -c "include /" /etc/named.conf) -eq 1 ]]; then
      grep -v -F -e "include /" /etc/named.conf > /etc/named.conf-temp
      cat /etc/named.conf-temp > /etc/named.conf
    fi
    
    # Add trusted include if missing
    if [[ $(grep -c 'include "/etc/named-trusted.conf";' /etc/named.conf) -eq 0 ]]; then
      echo 'include "/etc/named-trusted.conf";' >> /etc/named.conf
    fi
    
    # Build trusted ACL
    echo 'acl "trusted" {' > /etc/named-trusted.conf
    export KUBECONFIG=/root/openstack-upi/auth/kubeconfig
    for IP in $(oc get nodes -o wide --no-headers | awk '{print $6}'); do
      echo "  ${IP}/32;" >> /etc/named-trusted.conf
    done
    echo "  localhost;" >> /etc/named-trusted.conf
    echo "  localnets;" >> /etc/named-trusted.conf
    echo "};" >> /etc/named-trusted.conf
    

    🔧 Insert into named.conf after recursion yes;:

    allow-recursion { trusted; };
    allow-query-cache { trusted; };
    request-ixfr no;
    allow-transfer { none; };
    

    Then restart named to apply changes.
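    Putting the pieces together, the resulting named.conf fragments look roughly like this (the node IP is illustrative; the script above generates the real list in /etc/named-trusted.conf):

```
acl "trusted" {
  192.0.2.10/32;   // one entry per cluster node, generated by the script
  localhost;
  localnets;
};

options {
  recursion yes;
  allow-recursion { trusted; };
  allow-query-cache { trusted; };
  request-ixfr no;
  allow-transfer { none; };
};
```

    With this in place, only cluster nodes and local networks can recurse or read the cache, and zone transfers are refused entirely.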


    🚫 HTTP TRACE / TRACK Methods

    TRACE and TRACK methods are legacy HTTP features that can be exploited for cross-site tracing (XST) attacks.

    ✅ Disable TRACE / TRACK

    Create /etc/httpd/conf.d/disable-track-trace.conf:

    RewriteEngine on
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    

    Restart Apache:

    systemctl restart httpd
    
    
    
    

    📁 NFS Shares – World Readable Risk

    Exposing NFS shares to the world can lead to unauthorized access and data leakage.

    ✅ Lock NFS to Cluster Nodes

    echo "[NFS Exports Lock Down Started]"
    export KUBECONFIG=/root/openstack-upi/auth/kubeconfig
    cp /etc/exports /etc/exports-$(date +%s)
    echo "" > /etc/exports
    for IP in $(oc get nodes -o wide --no-headers | awk '{print $6}'); do
      echo "/export ${IP}(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
    done
    echo "/export 127.0.0.1(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
    exportfs -r
    

    🔐 SSL Certificates – CLI Access Challenges

    Managing SSL certificates for CLI access can be tricky, especially during updates.

    ✅ Recommendations

    • Use the Ingress Node Firewall Operator to restrict access to sensitive ports.
    • Monitor and rotate certificates regularly.
    • Validate CLI certificate chains and ensure proper trust anchors are configured.

    Final Thoughts

    Security in OpenShift UPI is not just about firewalls and RBAC—it’s about hardening every layer of the stack. By locking down DNS, HTTP, NFS, and SSL, you reduce your attack surface and protect your infrastructure from common threats.

  • Security Profiles Operator on OpenShift Container Platform on IBM Power

    *Security Profiles Operator (SPO)* simplifies security policy management for namespaced workloads and integrates seamlessly with OpenShift Container Platform’s compliance tooling. SPO manages *seccomp* and *SELinux* profiles as custom resources to keep workloads secure and compliant. The SPO features include:

    • *Creation and distribution* of seccomp and SELinux profiles
    • *Binding policies* to pods for fine-grained security control
    • *Recording workloads* to generate tailored profiles
    • *Synchronizing profiles* across worker nodes
    • *Advanced configuration*: log enrichment, webhook setup, metrics, and namespace restrictions

    You can install it right from OperatorHub and use it on your OpenShift Container Platform on IBM Power. See https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/security_and_compliance/security-profiles-operator#spo-overview for detailed install instructions.
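    As a sketch (the name, namespace, and action below are illustrative), a minimal seccomp profile managed by SPO looks like:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: log-all
  namespace: my-namespace
spec:
  # Log syscalls instead of blocking them -- a safe starting point
  # before tightening to SCMP_ACT_ERRNO
  defaultAction: SCMP_ACT_LOG
```

    SPO renders the profile onto each worker node; a pod then references it through securityContext.seccompProfile with type Localhost and a localhostProfile path of the form operator/my-namespace/log-all.json.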

  • Dynamic GOMAXPROCS

    Go 1.25 adds container-aware GOMAXPROCS. Instead of assuming it has all available processors, the Go runtime now respects the CPU limits specified through cgroup v2. This ensures workloads aren’t throttled or killed for trying to use more CPU than their container allows.

    You can disable this feature with GODEBUG=containermaxprocs=0, or tune GOMAXPROCS as you need (for instance, pinning it to 1 CPU when 2 or 8 threads are available).
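    For illustration, both knobs are plain environment variables read by the Go runtime at startup; myapp below is a stand-in shell function, not a real Go binary:

```shell
# Stand-in for your Go 1.25 binary so these lines are runnable as-is.
myapp() { echo "running with GODEBUG=$GODEBUG GOMAXPROCS=$GOMAXPROCS"; }

# Disable the container-aware default entirely:
GODEBUG=containermaxprocs=0 myapp

# Or pin the value explicitly; an explicit GOMAXPROCS always wins
# over the cgroup-derived default:
GOMAXPROCS=1 myapp
```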

    Thanks to Karthik for the heads up.

    Go 1.25 Release Notes

  • FYI: Announcing watsonx.data on IBM Power Tech Demo Availability

    Power clients who are running solutions on the platform for business-critical data such as Oracle, Db2®, Db2 for i, and SAP HANA, and who want to remain on Power for their AI and analytics solutions, can do exactly that with watsonx.data on Power. That is why today we are announcing the availability of a Tech Demo of watsonx.data on IBM Power Virtual Server. You can register here or contact your IBM sales representative or IBM Business Partner to access watsonx.data on Power with Presto or Spark engines to execute SQL queries or build machine learning models using sample data stored in IBM Cloud Object Storage. IBM is committed to making watsonx.data available on-prem on Power processor-based servers by the end of the year to unify, govern, and activate enterprise data at scale for AI and analytics.

    You can learn more at the IBM site.