Category: IBM Power Systems

  • Notes from Testing SMB/CIFS CSI driver with OpenShift

    1. Log in to the OpenShift cluster with oc login. You'll need to authenticate with a username and password, not a kubeconfig.
    2. Clone the repository: git clone https://github.com/prb112/openshift-samba
    3. Change into the directory: cd openshift-samba
    4. Create the project: oc new-project samba-test
    5. Update Project permissions
    oc label namespace/samba-test security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/enforce=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/audit=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/warn=privileged --overwrite
    
    1. Expose the integrated image registry with a default route:
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    
    1. Run ./enable-registry-and-push.sh
    $ ./enable-registry-and-push.sh
    === Image successfully pushed to OpenShift registry ===
    You can now use this image in your deployments with: default-route-openshift-image-registry.apps.kt-test-cp4ba-1174.powervs-openshift-ipi.cis.ibm.net/samba-test/samba:latest
    
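    If the push fails, it helps to confirm the default route exists and that you are logged in to it. A quick check (a sketch; default-route is the route name created when defaultRoute is set to true):

    ```shell
    # Confirm the registry's default route is exposed
    oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}{"\n"}'

    # Log in to the exposed registry with your OpenShift token (podman shown here)
    REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
    podman login -u "$(oc whoami)" -p "$(oc whoami -t)" --tls-verify=false "$REGISTRY"
    ```
    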
    1. Create the SMB credentials secret: oc create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD"
    2. Set up the SMB server:
    cat << EOF | oc apply -f -
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: smb-server
      namespace: samba-test
      labels:
        app: smb-server
    spec:
      type: ClusterIP
      selector:
        app: smb-server
      ports:
        - port: 445
          name: smb-server
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: smb-client-provisioner
      namespace: samba-test
    ---
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: smb-server
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: smb-server
      template:
        metadata:
          name: smb-server
          labels:
            app: smb-server
        spec:
          serviceAccountName: smb-client-provisioner
          nodeSelector:
            kubernetes.io/os: linux
            kubernetes.io/hostname: worker-0
          containers:
            - name: smb-server
              image: image-registry.openshift-image-registry.svc:5000/samba-test/samba:latest
              ports:
                - containerPort: 445
              securityContext:
                privileged: true
                capabilities:
                  add:
                    - SYS_ADMIN
                    - FOWNER
                    - NET_ADMIN
                  drop:
                    - ALL
                runAsUser: 0
                runAsNonRoot: false
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: true
              volumeMounts:
                - mountPath: /export/smbshare
                  name: data-volume
          volumes:
            - name: data-volume
              hostPath:
                path: /var/smb
                type: DirectoryOrCreate
    EOF
    
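    Before moving on, it's worth confirming the server actually came up. A quick verification sketch (names match the manifests above):

    ```shell
    # Wait for the StatefulSet to finish rolling out
    oc rollout status statefulset/smb-server -n samba-test --timeout=300s

    # Confirm the Service has an endpoint on port 445
    oc get endpoints smb-server -n samba-test
    ```
    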
    1. Grant the service account the required SCCs: oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:samba-test:smb-client-provisioner and oc adm policy add-scc-to-user privileged -z smb-client-provisioner -n samba-test
    2. Delete the existing workloads so they are recreated with the new permissions: oc delete rs --all -n samba-test
    3. Reset the samba-test permissions
    oc rsh smb-server-0
    chmod -R 777 /export
    
    1. Check that connectivity works:
    # oc rsh smb-server-0
    $ smbclient //smb-server.samba-test.svc.cluster.local/data -U USERNAME --password=PASSWORD -W WORKGROUP
    $ mkdir /export/abcd
    
    1. Then create the SMB test resources:
    cat <<EOF | oc apply -f -
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: smb
    provisioner: smb.csi.k8s.io
    parameters:
      source: //smb-server.samba-test.svc.cluster.local/data
      csi.storage.k8s.io/provisioner-secret-name: smbcreds
      csi.storage.k8s.io/provisioner-secret-namespace: samba-test
      csi.storage.k8s.io/node-stage-secret-name: smbcreds
      csi.storage.k8s.io/node-stage-secret-namespace: samba-test
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    mountOptions:
      - dir_mode=0777
      - file_mode=0777
      - uid=1001
      - gid=1001
      - noperm
      - mfsymlinks
      - cache=strict
      - noserverino  # required to prevent data corruption
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-smb-1005
      namespace: samba-test
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: smb
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-smb
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-smb
      template:
        metadata:
          labels:
            app: nginx-smb
        spec:
          containers:
            - image: registry.access.redhat.com/ubi10/nginx-126@sha256:8e282961aa38ee1070b69209af21e4905c2ca27719502e7eaa55925c016ecb76
              name: nginx-smb
              command:
                - "/bin/sh"
                - "-c"
                - while true; do echo $(date) >> /mnt/outfile; sleep 1; done
              volumeMounts:
                - name: smb01
                  mountPath: "/mnt"
                  readOnly: false
          volumes:
            - name: smb01
              persistentVolumeClaim:
                claimName: pvc-smb-1005
    EOF
    
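    Once applied, you can confirm the volume provisioned and the writer pod is running. A verification sketch using the names from the manifests above:

    ```shell
    # The PVC should reach Bound once the CSI driver provisions the share
    oc get pvc pvc-smb-1005 -n samba-test

    # Wait for the test deployment to become available
    oc wait deployment/nginx-smb -n samba-test --for=condition=Available --timeout=300s

    # The outfile should be accumulating timestamps on the SMB share
    oc exec deploy/nginx-smb -n samba-test -- tail -n 3 /mnt/outfile
    ```
    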
    1. Find the test pod oc get pod -l app=nginx-smb
    2. Connect to the test pod and load a test file
    # oc rsh pod/nginx-smb-6b55dc568-mbk9t
    $ dd if=/dev/random of=/mnt/testfile bs=1M count=10
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    
    1. Rollout restart
    # oc rollout restart deployment nginx-smb
    
    1. Find the test pod oc get pod -l app=nginx-smb
    2. Connect to the test pod and load a test file
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    The sha256sum should agree with the first one.

    1. Restart the SMB Server
    oc rollout restart statefulset smb-server
    
    1. Connect to the test pod and load a test file
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    These should all agree.
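    To make the agreement check less error-prone, a small helper can compare two sums and fail loudly on a mismatch. This is a sketch; the real sums still come from running sha256sum inside the pod as shown above.

    ```shell
    # Fail loudly if two sha256 checksums differ
    compare_sums() {
      if [ "$1" = "$2" ]; then
        echo "MATCH"
      else
        echo "MISMATCH" >&2
        return 1
      fi
    }

    # Demonstrated with locally computed sums; in the walkthrough you would
    # capture each one via: oc rsh <pod> sha256sum /mnt/testfile
    before=$(printf 'hello' | sha256sum | awk '{print $1}')
    after=$(printf 'hello' | sha256sum | awk '{print $1}')
    compare_sums "$before" "$after"   # prints MATCH
    ```
    
    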

    That’s all for testing. (I tried it out on your system.)

  • Aside: Developing Applications Using Python Packages on IBM Power

    Janani Janakiraman posted Developing Applications Using Python Packages on IBM Power

    Are you an independent software vendor (ISV) or a customer looking to develop Python applications on the IBM Power platform? Then this blog is for you! It walks you through examples of using IBM’s Open Source Edge (OSE) and optimized, prebuilt Python wheels to accelerate development on IBM Power.


    IBM Power-optimized Python wheels are available via a DevPi repository, offering performance and compatibility benefits for AI/ML workloads. For best results, use Python versions 3.10–3.12 and set up a virtual environment with --prefer-binary and --extra-index-url to install packages from the IBM wheel repository.
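    The setup described above can be sketched as follows. The index URL is the IBM wheel repository mentioned later in this digest, and numpy is just a stand-in for whichever package you need:

    ```shell
    # Create and activate a virtual environment (Python 3.10–3.12 recommended)
    python3.12 -m venv powervenv
    source powervenv/bin/activate

    # Prefer prebuilt ppc64le wheels from IBM's repository, falling back to PyPI
    pip install --prefer-binary \
      --extra-index-url https://wheels.developerfirst.ibm.com/ppc64le/linux \
      numpy

    # Pin versions for reproducibility across environments
    pip freeze > requirements.txt
    ```
    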


    The OSE tool helps evaluate package availability and encourages community contributions to build scripts. Practical workflows are available in the pyeco GitHub repository, and troubleshooting tips for native libraries like libopenblas.so and libgfortran.so are included. Pinning package versions in requirements.txt ensures reproducibility and stability across environments.


    Community feedback is welcome—suggest packages, report issues, or contribute via GitHub to help grow the ecosystem!

    Please use these great resources.

  • Securing OpenShift UPI: Hardening DNS, HTTP, NFS, and SSL

    OpenShift UPI (User-Provisioned Infrastructure) offers flexibility and control, but with that comes the responsibility of securing the underlying services. In this post, we’ll walk through practical steps to lock down common services—DNS, HTTP, NFS, and SSL—to mitigate known vulnerabilities and improve your cluster’s security posture.


    🔐 DNS Server Hardening

    DNS is often overlooked, but it can be a rich source of information leakage and attack vectors. Here are four common DNS-related vulnerabilities and how to mitigate them:

    1. Cache Snooping – Remote Information Disclosure

    Attackers can infer what domains have been queried by your server.

    2. Recursive Query – Cache Poisoning Weakness

    Unrestricted recursion can allow attackers to poison your DNS cache.

    3. Spoofed Request – Amplification DDoS

    Open DNS resolvers can be abused for DDoS amplification attacks.

    4. Zone Transfer – Information Disclosure (AXFR)

    Misconfigured zone transfers can leak internal DNS data.

    ✅ Mitigation Script

    Use the following script to lock down named (BIND) and restrict access to trusted nodes only:

    # Backup
    cp /etc/named.conf /etc/named.conf-$(date +%s)
    
    # Remove bad includes
    if [[ $(grep -c "include /" /etc/named.conf) -eq 1 ]]; then
      grep -v -F -e "include /" /etc/named.conf > /etc/named.conf-temp
      cat /etc/named.conf-temp > /etc/named.conf
    fi
    
    # Add trusted include if missing
    if [[ $(grep -c 'include "/etc/named-trusted.conf";' /etc/named.conf) -eq 0 ]]; then
      echo 'include "/etc/named-trusted.conf";' >> /etc/named.conf
    fi
    
    # Build trusted ACL
    echo 'acl "trusted" {' > /etc/named-trusted.conf
    export KUBECONFIG=/root/openstack-upi/auth/kubeconfig
    for IP in $(oc get nodes -o wide --no-headers | awk '{print $6}'); do
      echo "  ${IP}/32;" >> /etc/named-trusted.conf
    done
    echo "  localhost;" >> /etc/named-trusted.conf
    echo "  localnets;" >> /etc/named-trusted.conf
    echo "};" >> /etc/named-trusted.conf
    

    🔧 Insert into named.conf after recursion yes;:

    allow-recursion { trusted; };
    allow-query-cache { trusted; };
    request-ixfr no;
    allow-transfer { none; };
    

    Then restart named to apply changes.
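    It's prudent to validate the configuration before restarting; afterwards, a recursion probe from a host outside the trusted ACL should be refused. A sketch:

    ```shell
    # Validate the configuration before restarting
    named-checkconf /etc/named.conf

    systemctl restart named

    # From a host NOT in the trusted ACL, recursion should now be refused:
    #   dig @<dns-server> example.com      -> status: REFUSED
    # Zone transfers should also fail:
    #   dig @<dns-server> <your-zone> AXFR -> Transfer failed
    ```
    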


    🚫 HTTP TRACE / TRACK Methods

    TRACE and TRACK methods are legacy HTTP features that can be exploited for cross-site tracing (XST) attacks.

    ✅ Disable TRACE / TRACK

    Create /etc/httpd/conf.d/disable-track-trace.conf:

    RewriteEngine on
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    

    Restart Apache:

    systemctl restart httpd
    
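    After the restart, you can confirm the methods are rejected; a 403 response indicates the rewrite rule is working:

    ```shell
    # TRACE should now return 403 Forbidden
    curl -s -o /dev/null -w '%{http_code}\n' -X TRACE http://localhost/

    # Normal GETs should be unaffected
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
    ```
    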
    📁 NFS Shares – World Readable Risk

    Exposing NFS shares to the world can lead to unauthorized access and data leakage.

    ✅ Lock NFS to Cluster Nodes

    echo "[NFS Exports Lock Down Started]"
    export KUBECONFIG=/root/openstack-upi/auth/kubeconfig
    cp /etc/exports /etc/exports-$(date +%s)
    echo "" > /etc/exports
    for IP in $(oc get nodes -o wide --no-headers | awk '{print $6}'); do
      echo "/export ${IP}(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
    done
    echo "/export 127.0.0.1(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
    exportfs -r
    
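    A quick way to confirm the new export list took effect:

    ```shell
    # Show the active export list after re-exporting
    exportfs -v

    # From a cluster node the share should be visible; from anywhere else it
    # should be denied
    showmount -e <nfs-server-ip>
    ```
    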

    🔐 SSL Certificates – CLI Access Challenges

    Managing SSL certificates for CLI access can be tricky, especially during updates.

    ✅ Recommendations

    • Use the Ingress Node Firewall Operator to restrict access to sensitive ports.
    • Monitor and rotate certificates regularly.
    • Validate CLI certificate chains and ensure proper trust anchors are configured.

    Final Thoughts

    Security in OpenShift UPI is not just about firewalls and RBAC—it’s about hardening every layer of the stack. By locking down DNS, HTTP, NFS, and SSL, you reduce your attack surface and protect your infrastructure from common threats.

  • Security Profiles Operator on OpenShift Container Platform on IBM Power

    *Security Profiles Operator (SPO)* simplifies security policy management for namespaced workloads and integrates seamlessly with OpenShift Container Platform’s compliance tooling. SPO manages *seccomp* and *SELinux* profiles as custom resources to keep workloads secure and compliant. The SPO features include:

    • *Creation and distribution* of seccomp and SELinux profiles
    • *Binding policies* to pods for fine-grained security control
    • *Recording workloads* to generate tailored profiles
    • *Synchronizing profiles* across worker nodes
    • *Advanced configuration*: log enrichment, webhook setup, metrics, and namespace restrictions

    You can install right from Operator Hub and use it on your OpenShift Container Platform on IBM Power. See https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/security_and_compliance/security-profiles-operator#spo-overview for detailed install instructions

  • Dynamic GOMAXPROCS

    Go 1.25 adds container-aware GOMAXPROCS. Instead of assuming it can use every processor on the host, the Go runtime now respects the cgroup v2 CPU limit applied to its container. This keeps workloads from being throttled or killed for trying to use more CPU than they were allotted.

    You can disable the behavior with GODEBUG=containermaxprocs=0, or override it by setting GOMAXPROCS explicitly (for instance to 1 when 2 or 8 threads are available).
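    A quick way to see the new behavior (a sketch; assumes podman and the golang:1.25 image are available):

    ```shell
    # A tiny program that prints the effective GOMAXPROCS
    cat > main.go <<'EOF'
    package main

    import (
    	"fmt"
    	"runtime"
    )

    func main() {
    	fmt.Println(runtime.GOMAXPROCS(0))
    }
    EOF

    # With Go 1.25 and a 2-CPU cgroup limit, this should print 2
    podman run --rm --cpus=2 -v "$PWD":/src:Z -w /src docker.io/library/golang:1.25 go run main.go

    # Opting out restores the old behavior (all host CPUs)
    podman run --rm --cpus=2 -e GODEBUG=containermaxprocs=0 \
      -v "$PWD":/src:Z -w /src docker.io/library/golang:1.25 go run main.go
    ```
    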

    Thanks to Karthik for the heads up….

    Go 1.25 Release Notes

  • FYI: Announcing watsonx.data on IBM Power Tech Demo Availability

    Power clients who are running solutions on the platform for business-critical data such as Oracle, Db2®, Db2 for i, and SAP HANA, and who want to remain on Power for their AI and analytics solutions, can do exactly that with watsonx.data on Power. That is why today we are announcing the availability of a Tech Demo of watsonx.data on IBM Power Virtual Server. You can register here or contact your IBM sales representative or IBM Business Partner to access watsonx.data on Power with Presto or Spark engines to execute SQL queries or build machine learning models using sample data stored in IBM Cloud Object Storage. IBM is committed to making watsonx.data available on-prem on Power processor-based servers by the end of the year to unify, govern, and activate enterprise data at scale for AI and analytics.

    You can learn more at the ibm site.

  • 🚀 Builds for OpenShift 1.5 is now GA!

    Now available on OpenShift 4.16–4.19, this release brings powerful new features for building container images natively on your cluster—including support for ppc64le!

    🔧 Highlights:

    • NodeSelector & Scheduler support via shp CLI
    • Shallow Git cloning for faster builds

    💡 Built on Shipwright, Builds 1.5 simplifies image creation with Kubernetes-native APIs, Buildah/S2I strategies, and full CLI + web console integration.

    Perfect for teams running on IBM Power Systems or running Multi-Architecture Compute clusters. Start building smarter, faster, and more consistently across all the architectures in your cluster.

    📘 Learn more: https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.5/html/release_notes/ob-release-notes

  • Great News… IBM has Open Source Wheel Packages for Linux on Power

    Priya Seth posted about Open Source Wheel Packages for Linux on Power:

    IBM provides a dedicated repository of Python wheel packages optimized for the Linux on Power (ppc64le) architecture. These pre-built binaries simplify Python development on Power systems by eliminating the need to compile packages from source—saving time and reducing complexity.

    Wheel files (.whl) are the standard for distributing pre-compiled Python packages. For developers working on Power architecture, having access to architecture-specific wheels ensures compatibility and speeds up development.

    IBM hosts a curated collection of open-source Python wheels for the ppc64le platform listed at https://open-source-edge.developerfirst.ibm.com/

    Use pip to download the package without installing it:

    pip download <package_name>==<version> --prefer-binary --index-url=https://wheels.developerfirst.ibm.com/ppc64le/linux --verbose --no-deps
    

    Replace <package_name> and <version> with the desired values.

    Whether you’re building AI models, data pipelines, or enterprise applications, this repository helps accelerate your Python development on Power.

    You can also refer to https://community.ibm.com/community/user/blogs/nikhil-kalbande/2025/08/01/install-wheels-from-ibm-python-wheel-repository

  • Playing with Container Lifecycle Hooks and ContainerStopSignals

    DRAFT This is not a complete article. I haven’t yet fully tested and vetted the steps I built. I will come back and hopefully update.

    Kubernetes orchestrates Pods across multiple nodes. When a Pod lands on a node, the kubelet admits the Pod and its containers and manages their lifecycle. When the Pod is terminated, the kubelet sends a SIGTERM signal to the running processes. Kubernetes Enhancement – Container Stop Signals #4960 adds a custom Pod stop signal, spec.containers[].lifecycle.stopSignal, letting you choose one of sixty-five additional stop signals to stop a container. While the feature is behind a feature gate, the node reports the supported signals under supportedStopSignalsLinux in its status.

    For example, a user may use SIGQUIT signal to stop a container in the Pod. To do so with kind,

    1. Enable the ContainerStopSignals featuregate in a kind config called kind-cluster-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    featureGates:
      ContainerStopSignals: true
    nodes:
    - role: control-plane
      kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
            extraArgs:
              v: "1"
        scheduler:
            extraArgs:
              v: "1"
        controllerManager:
            extraArgs:
              v: "1"
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            v: "1"
    - role: worker
      kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            v: "1"
    
    1. Download kind
    mkdir -p dev-cache
    GOBIN=$(pwd)/dev-cache go install sigs.k8s.io/kind@v0.29.0
    
    1. Start the kind cluster
    KIND_EXPERIMENTAL_PROVIDER=podman dev-cache/kind create cluster \
    		--image quay.io/powercloud/kind-node:v1.33.1 \
    		--name test \
    		--config kind-cluster-config.yaml \
    		--wait 5m
    
    1. Create a namespace
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        kubernetes.io/metadata.name: lifecycle-test
        pod-security.kubernetes.io/audit: restricted
        pod-security.kubernetes.io/audit-version: v1.24
        pod-security.kubernetes.io/enforce: restricted
        pod-security.kubernetes.io/warn: restricted
        pod-security.kubernetes.io/warn-version: v1.24
      name: lifecycle-test
    
    1. Create a Pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: test
      namespace: lifecycle-test
    spec:
      containers:
      - name: test
        command: ["/bin/sh", "-c"]
        args:
          - function cleanup() { echo "CALLED SIGQUIT"; };
            trap cleanup SIGQUIT;
            sleep infinity & wait $!
        image: registry.access.redhat.com/ubi9/ubi
        lifecycle:
          stopSignal: SIGQUIT
    
    1. Check kubectl describe pod/test -n lifecycle-test
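    To see the stop signal in action, watch the Pod's logs while deleting it; if the trap fires you should see the cleanup message. A hypothetical session, assuming the manifests above were applied with kubectl apply -f:

    ```shell
    # Confirm the configured stop signal on the Pod
    kubectl describe pod/test -n lifecycle-test | grep -i -A1 stopsignal

    # Follow the logs in the background...
    kubectl logs -f pod/test -n lifecycle-test &

    # ...then delete the Pod; the kubelet should send SIGQUIT instead of SIGTERM
    kubectl delete pod/test -n lifecycle-test

    # If the trap runs, "CALLED SIGQUIT" appears in the log output
    ```
    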

    You’ve seen how this feature functions with Kubernetes and can take advantage of ContainerStopSignals in your environment.

    References

    1. Tracker: Kubernetes Enhancement – Container Stop Signals #4960 issue 30051
    2. KEP-4960: Container Stop Signals
    3. Kubernetes Documentation: Container Lifecycle Hooks
    4. An Introductory Guide to Managing the Kubernetes Pods Lifecycle
    5. Stop Signals
  • Great Job Team: Next-generation DataStage is now supported on IBM Power (ppc64le) with 5.2.0

    The IBM Team announced support for DataStage on IBM Power.

    IBM Cloud Pak for Data now supports the DataStage service on IBM Power servers. This means that you can run your data integration and extract, transform, and load (ETL) workloads directly on IBM Power, just like you already do on x86. With this update, it is easier than ever to use your existing Power infrastructure for modern data and AI projects.

    With the release of IBM DataStage 5.2.0, the DataStage service is now officially supported on IBM Power (ppc64le). This enables clients to run enterprise-grade ETL and data integration workloads on the Power platform, offering flexibility, performance, and consistency across architectures.

    See https://www.ibm.com/docs/en/software-hub/5.2.x?topic=requirements-ppc64le-hardware and https://community.ibm.com/community/user/blogs/yussuf-shaikh/2025/07/15/datastage-5-2-0-is-now-supported-on-ibm-power