Category: IBM Power Systems

  • Using procMount in your Kubernetes Pod

    Recently, I ran across Kubernetes Enhancement Proposal (KEP) 4265, where the authors update the Pod.spec.procMount capability to manage /proc visibility in a Pod’s security context. With this KEP moving to on-by-default in v1.29.0, the Unmasked setting disables masking and exposes all paths in /proc (not just as read-only).

    In practice, the Default procMount prevents containers from accessing sensitive kernel data or interacting with host-level processes. With this enhancement, you can run unprivileged containers inside a container (container-in-a-container), build container images within a Pod, and use buildah in a Pod.

    The authors said it best in the KEP:

    The /proc filesystem is a virtual interface to kernel data structures. By default, Kubernetes instructs container runtimes to mask or restrict access to certain paths within /proc to prevent accidental or malicious exposure of host information. But this becomes problematic when users want to:

    Here is an example of creating a Pod:

    1. Create the project
    oc new-project proc-mount-example
    
    2. Create the Pod
    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nested-container-builder
      namespace: proc-mount-example
    spec:
      securityContext:
        runAsUser: 0
      containers:
      - name: builder
        image: registry.access.redhat.com/ubi9/ubi
        securityContext:
          privileged: true
          procMount: Unmasked
        command: ["/bin/sh"]
        args: ["-c", "sleep 3600"]
    EOF
    
    3. Open a shell into the Pod and install podman
    oc rsh nested-container-builder
    dnf install -y podman
    
    4. Change the shell prompt (so you know when the parent container is in focus) and start a nested container
    export PS1="parent-container# "
    podman run --name abcd --rm -it registry.access.redhat.com/ubi9/ubi sh
    
    5. From the parent container, run a nested container and install podman inside it
    parent-container# podman run --name abcd --rm -it registry.access.redhat.com/ubi9/ubi sh
    sh-5.1# dnf install -y podman
    
    6. Now run another container inside the nested one; you’ll see a failure on /dev/net/tun.
    sh-5.1# podman run --name abcd --rm -it registry.access.redhat.com/ubi9/ubi sh
    Trying to pull registry.access.redhat.com/ubi9/ubi:latest...
    Getting image source signatures
    Checking if image destination supports signatures
    Copying blob ea2f7ff2baa2 done   | 
    Copying config 4da9fa8b5a done   | 
    Writing manifest to image destination
    Storing signatures
    ERRO[0018] Preparing container d402a22ebe452597a83b3795639f86e333c1dbb142703737d6d705c6a6f445c7: setting up Pasta: pasta failed with exit code 1:
                    Failed to open() /dev/net/tun: No such file or directory
                                                                            Failed to set up tap device in namespace 
    Error: mounting storage for container d402a22ebe452597a83b3795639f86e333c1dbb142703737d6d705c6a6f445c7: creating overlay mount to /var/lib/containers/storage/overlay/ab589890d52b88e51f1f945b55d07ac465de1cefd2411d8fab33b4d2769c4404/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/K6CXJGRTW32MPWEIMAH4IGCNZ5,upperdir=/var/lib/containers/storage/overlay/ab589890d52b88e51f1f945b55d07ac465de1cefd2411d8fab33b4d2769c4404/diff,workdir=/var/lib/containers/storage/overlay/ab589890d52b88e51f1f945b55d07ac465de1cefd2411d8fab33b4d2769c4404/work,nodev,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
    fuse: device not found, try 'modprobe fuse' first
    fuse-overlayfs: cannot mount: No such file or directory
    : exit status 1
    

    The procMount field supports two values:

    • Default: Maintains the current behavior—masking sensitive /proc paths. If procMount is not specified, it defaults to Default, ensuring backward compatibility and preserving security for most workloads.
    • Unmasked: Bypasses the default masking, giving the container full access to /proc.
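
    One way to see the difference from inside a container is the mount table: with the Default procMount, the runtime overlays tmpfs or read-only proc mounts on sensitive paths, and those extra entries disappear under Unmasked. Here is a local sketch; the mount lines below are a hypothetical sample, not output captured from the Pod above:

```shell
# Hypothetical sample of 'mount' output inside a Default-procMount container;
# the extra read-only/tmpfs mounts under /proc/ are the masking at work.
mount_output='proc on /proc type proc (rw,nosuid,nodev,noexec)
tmpfs on /proc/acpi type tmpfs (ro,relatime)
proc on /proc/kcore type proc (ro,relatime)
proc on /proc/keys type proc (ro,relatime)'

# Count the masked sub-paths; with procMount: Unmasked this count drops to 0.
masked=$(printf '%s\n' "$mount_output" | grep -c ' on /proc/')
echo "$masked"
```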

    Allowing unmasked access to /proc is a privileged operation. A container with root access and an unmasked /proc could interact with the host system in dangerous ways. Use this powerful feature carefully.

    Good luck.


  • Configuring KubeletConfig for podsPerCore and maxPods

    I found a useful KubeletConfig.

    In Kubernetes, the podsPerCore parameter, when used in node configuration, specifies the maximum number of pods that can run on a node based on the number of its CPU cores. The default value for podsPerCore is 0, which essentially disables this limit, meaning there’s no constraint imposed based on the number of cores.
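
    The effective pod capacity of a node is the smaller of maxPods and podsPerCore multiplied by the node's core count (when podsPerCore is non-zero). A quick sketch with hypothetical numbers:

```shell
# Hypothetical worker node: 16 cores, with the limits shown below.
cores=16
podsPerCore=10
maxPods=250

# The kubelet enforces the smaller of the two limits.
perCore=$((podsPerCore * cores))
if [ "$perCore" -lt "$maxPods" ]; then effective=$perCore; else effective=$maxPods; fi
echo "$effective"
```

    With these numbers the per-core limit (160) wins over maxPods (250), so the node tops out at 160 pods.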

    You can check your current settings using:

    $ oc debug node/worker-1
    sh-4.4# chroot /host
    sh-4.4# cat /etc/kubernetes/kubelet.conf | grep maxPods
      "maxPods": 250,
    sh-4.4# cat /etc/kubernetes/kubelet.conf | grep podsPerCore
      "podsPerCore": 10,
    

    In your environment, substitute worker-1 with the name of a node that belongs to your MachineConfigPool.

    You can change your configuration using:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: set-max-pods-core 
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: "" 
      kubeletConfig:
        podsPerCore: 10 
        maxPods: 250 
    

    References

    1. podsPerCore – https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/postinstallation_configuration/post-install-node-tasks
    2. defaults – https://access.redhat.com/solutions/6998814
  • Introducing the Open Source Edge for IBM Power

    Learn more about what the IBM Power team is doing with open source.

    The IBM Power team is excited to introduce Open Source Edge for IBM Power, an evolution of our previous tool for finding open source packages for Power, the Open Source Power Availability Tool (OSPAT). While OSPAT provided a periodically updated snapshot of available packages, Open Source Edge takes things further by offering more detail, greater currency, real-time data access, and interactive features to help you explore the open source resources available on Power.


    Open Source Edge offers all Power developers and users a central location to keep on top of the latest packages and their versions available for Linux on Power. While designing solutions, it is often critical to compose a solution from a variety of components, and in the software world those components and their versions change rapidly. With an increasingly turbulent security environment, it also becomes critical to understand not just which versions are available, but the composition of each component and its individual security profile, and to have transparency into the build process and environment.

    Read more at https://open-source-edge.developerfirst.ibm.com/ and https://community.ibm.com/community/user/blogs/hiro-miyamoto/2025/06/26/introducing-the-open-source-edge-for-ibm-power

  • Setting up Local Storage Operator and OpenShift Data Foundation on IBM Power

    Here are my notes from LSO and ODF setup on Power

    Install Local Storage Operator

    1. Log in to the OpenShift Web Console.
    2. Click Operators → OperatorHub.
    3. Type local storage in the Filter by keyword… box to find the Local Storage Operator in the list of operators and click on it.
    4. Set the following options on the Install Operator page:
      a. Update channel as stable.
      b. Installation Mode as A specific namespace on the cluster.
      c. Installed Namespace as Operator recommended namespace openshift-local-storage.
      d. Approval Strategy as Automatic.
    5. Click Install.

    Setup Labels for Storage Nodes

    1. Add a label to the workers used by ODF; these should be the workers where the storage is attached
    oc get nodes -l node-role.kubernetes.io/worker= -oname \
        | xargs -I {} oc label {} cluster.ocs.openshift.io/openshift-storage=
    
    2. Rescan the SCSI bus so all of the devices are picked up.
    oc get nodes -l node-role.kubernetes.io/worker= -oname \
        | xargs -I {} oc debug {} -- chroot /host rescan-scsi-bus.sh 
    
    3. Build the by-id paths
    oc get nodes -l node-role.kubernetes.io/worker= -oname \
        | xargs -I {} oc debug {} -- chroot /host udevadm trigger --settle
    

    Note: don’t worry about Failed to write 'change' to '/sys/devices/vio/4004/uevent', ignoring: No such device events.

    4. Discover the local volumes using LocalVolumeDiscovery
    cat << EOF | oc apply -f -
    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeDiscovery
    metadata:
      name: auto-discover-devices
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
    EOF
    
    5. Check that the LocalVolumeDiscovery has started and is Discovering
    oc get LocalVolumeDiscovery -n openshift-local-storage -oyaml auto-discover-devices
    ...
    status:
      conditions:
      - lastTransitionTime: "2025-06-26T01:27:03Z"
        message: successfully running 3 out of 3 discovery daemons
        status: "True"
        type: Available
      observedGeneration: 2
      phase: Discovering
    
    6. Now that it’s ready, find the disks:
    # oc get LocalVolumeDiscoveryResult -n openshift-local-storage -ojson | jq -r '.items[].status.discoveredDevices[] | select(.status.state == "Available" and .type == "disk").path' | sort -u
    /dev/sde
    /dev/sdf
    /dev/sdg
    /dev/sdh
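
    To see what that jq filter selects, here it is run against a small, hypothetical LocalVolumeDiscoveryResult payload: only disks in the Available state survive, while partitions and unavailable disks are dropped.

```shell
# Hypothetical, trimmed LocalVolumeDiscoveryResult JSON for illustration only.
json='{"items":[{"status":{"discoveredDevices":[
 {"path":"/dev/sde","type":"disk","status":{"state":"Available"}},
 {"path":"/dev/sdb","type":"disk","status":{"state":"NotAvailable"}},
 {"path":"/dev/sde1","type":"part","status":{"state":"Available"}}]}}]}'

# Same filter as above: keep only available disks, print unique paths.
printf '%s' "$json" | jq -r '.items[].status.discoveredDevices[]
  | select(.status.state == "Available" and .type == "disk").path' | sort -u
```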
    
    7. Create the LocalVolume
    cat << EOF | oc apply -f -
    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: localblock
      namespace: openshift-local-storage
    spec:
      logLevel: Normal
      managementState: Managed
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: Exists
      storageClassDevices:
        - devicePaths:
            - /dev/sde
            - /dev/sdf
            - /dev/sdg
            - /dev/sdh
          storageClassName: localblock
          volumeMode: Block
    EOF
    
    8. Check that the LocalVolume is ready: oc get LocalVolume -n openshift-local-storage -oyaml
      status:
        conditions:
        - lastTransitionTime: "2025-06-26T18:40:20Z"
          message: Ready
          status: "True"
          type: Available
        generations:
        - group: apps
          hash: ""
          lastGeneration: 2
          name: diskmaker-manager
          namespace: openshift-local-storage
          resource: DaemonSet
        managementState: Managed
        observedGeneration: 1
        readyReplicas: 0
    

    If ready, we can proceed.

    9. Navigate to the ODF Operator.

    10. Click Create StorageSystem. Select localblock.

    You can proceed with the setup from there.

    Thanks to T K Chasan

  • Kudos to the IBM Power team for enabling Maximo Application Suite on IBM Power.


    IBM has enabled Maximo Application Suite (MAS) 9.x on IBM Power (ppc64le), installable on Red Hat OpenShift 4.17+, offering a resilient, secure, and sustainable platform for enterprise asset management. MAS streamlines asset lifecycles, enhances reliability, and reduces operational costs. Running MAS on Power delivers 99.9999% availability, robust security, and lower energy consumption compared to competitive systems.

    References

    1. Blog https://community.ibm.com/community/user/blogs/julie-mathew/2025/06/24/enabling-maximo-application-suite-9x-on-ibm-power
    2. Availability of MAS 9.1 on IBM Power – Announcement letter
    3. MAS documentation What’s New in MAS 9.1
  • Expanding Open Source Access: GitHub Actions Now Available for IBM Power, IBM Z and IBM LinuxONE

    Exciting news from IBM: GitHub Actions runners are coming to IBM Power, Z, and LinuxONE platforms, streamlining CI/CD for open-source projects across diverse architectures. This milestone empowers developers with seamless cross-platform automation, eliminating the need for multiple CI tools.

    IBM is actively collaborating with open-source communities and offering personalized onboarding support. Join us in shaping the future of open development: explore our GitHub repo, contribute, and grow with us! 💻🌍

    For more information, see https://community.ibm.com/community/user/blogs/mick-tarsel/2025/06/23/github-actions-power-z

  • 🚀 What’s New in OpenShift Container Platform 4.19

    Red Hat OpenShift 4.19 is here! This release is based on Kubernetes 1.32, uses the CRI-O 1.32 runtime, and runs on RHEL CoreOS 9.6. It brings many new features and enhancements across core platform capabilities and security.

    IBM Power users can deploy a high-performance cluster with excellent features and capabilities for their workloads.

    Let’s take a look at what’s new:


    🔧 Core Platform Enhancements

    • Gateway API via OpenShift Service Mesh 3
      Now GA (Generally Available), this enables more flexible and powerful ingress management using the Gateway API.
    • OVN-Kubernetes BGP Support
      Coming soon in a 4.19.z update, this will enhance networking capabilities with BGP support.
    • On-cluster Image Mode
      This allows for more flexible image management directly within the OpenShift cluster.
    • cgroups v1 Removed
      OpenShift 4.19 fully transitions to cgroups v2, aligning with modern Linux standards.

    Security Improvements

    • Cert-manager Router Integration
      Now supports loading secrets directly into the router, simplifying certificate management.
    • Bring Your Own External OIDC
      In Tech Preview, this allows integration with external OpenID Connect providers for authentication.

    📺 Learn More

    Check out the official video overview:
    🎥 What’s New in OpenShift 4.19

    Thanks for reading, and happy upgrading!

  • Outrigger: Rethinking Kubernetes Scheduling for a Smarter Future

    At DevConf.CZ 2025, a standout session from Alessandro Di Stefano and Prashanth Sundararaman introduced the Outrigger project, a forward-thinking initiative aimed at transforming Kubernetes scheduling into a dynamic, collaborative ecosystem. Building on the success of the Multiarch Tuning Operator for OpenShift, Outrigger leverages Kubernetes’ scheduling gates to go beyond traditional multi-architecture scheduling.

    👉 Watch the full session here:

    Excellent work by that team.

  • CP4D 5.2 release – IBM Knowledge Catalog (IKC) and DataStage are both now available on OpenShift on Power through Cloud Pak for Data

    Per the CP4D Leader, with CP4D 5.2 release – IBM Knowledge Catalog (IKC) and DataStage are both now available on OpenShift on Power through Cloud Pak for Data!

    – IBM Knowledge Catalog provides the methods that your enterprise needs to automate data governance so you can ensure data accessibility, trust, protection, security, and compliance.

    – With DataStage, you can design and run data flows that move and transform data. You can compose data flows with speed and accuracy using an intuitive graphical design interface that lets you connect to a wide range of data sources, integrate and transform data, and deliver it to your target system in batch or real time.

    Read more about it at:

    1. https://www.ibm.com/docs/en/software-hub/5.2.x?topic=requirements-ppc64le-hardware#services

    2. https://community.ibm.com/community/user/blogs/jay-carman/2025/06/12/introducing-ibm-knowledge-catalog-on-ibm-power

  • prometheus hack to reduce disk pressure in non-prod environments

    Here is a script to reduce monitoring disk pressure in non-production environments. It prunes the Prometheus database by lowering retention to one day.

    cat << EOF > cluster-monitoring-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: 1d
    EOF
    oc apply -f cluster-monitoring-config.yaml
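
    If disk usage is the real concern, time-based retention can be combined with a size cap. Assuming your OpenShift release supports the retentionSize field in this ConfigMap (check the monitoring docs for your version), the config could instead read:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 1d
      retentionSize: 10GiB   # cap the TSDB size in addition to the 1-day window
```

    Apply it the same way with oc apply -f.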