Category: OpenShift

  • Great work from the IBM Power10 Private Cloud Rack for Db2 Warehouse team

    The IBM Power10 Private Cloud Rack for Db2 Warehouse team posted an article on their offering, the next generation of the IBM Integrated Analytics System (IIAS), modernized to run on the Red Hat OpenShift Container Platform. As the team notes, this architecture shift enables a more modular and scalable deployment model, aligning with modern cloud-native practices.

    In their article, they outline the stringent performance and scalability requirements, and the use of the OpenShift Container Platform on Power10 with Storage Scale. For more detailed information, you can visit the IBM Data Management Community blog.

  • Multi-Arch Compute and the Red Hat OpenShift Container Platform on IBM Power

    Red Hat OpenShift Container Platform supports multi-arch compute, which allows you to mix supported compute architectures so you can build your optimal solution. With multi-architecture compute, you run pairs of architectures in the compute plane: a Power (ppc64le) control plane supports running Power and Intel workers (p-px), and an Intel (amd64) control plane supports Power and Intel workers (x-px). This setup uses the multi payload, which is manifest-listed, so you can run IBM Power (ppc64le) nodes alongside Intel (amd64) nodes.
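
    You can confirm that your cluster is running the manifest-listed multi payload with a quick check; the output should include release.openshift.io/architecture: multi:

    oc adm release info -o jsonpath="{ .metadata.metadata}"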

    In this document, you will find a series of steps to set up a Multi-Arch Compute cluster.

    After you install your cluster, adding Multi-Arch Compute is a post-installation task that follows this process:

    1. Prepare
    • Networking – ensure the required ports, DHCP, DNS (if you require it), and the load balancer are configured
    • Prepare cluster services – create a MachineConfigPool if you have different kernel parameters, add MachineConfigs, and isolate the ingress on one architecture type
    • Prepare Ignition – download the latest Ignition file
    2. Image
    • Download the architecture-specific image
    • Load the image on the target platform
    3. Ignite workers
    • Start them up
    • Approve the node bootstrapper CSRs
    • Approve the kubelet serving certificate CSRs
    4. Post startup
    • Add labels to the nodes (the approval and labeling steps are sketched after this list)
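
    A minimal sketch of the approval and labeling steps (the CSR name, node name, and label below are placeholders):

    # approve the pending node-bootstrapper CSR, then the kubelet serving CSR that follows
    oc get csr
    oc adm certificate approve <csr_name>

    # once the new worker is Ready, add your labels
    oc get nodes -o wide
    oc label node <node_name> example.com/workload-tier=intel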

    By following these steps, you can successfully install Intel and Power workers in an OpenShift cluster on IBM Power. This setup allows you to leverage the strengths of both architectures, providing a robust and flexible environment for your applications.

    Feel free to reach out if you have any questions or need further assistance with the installation process. Happy deploying!

    Reference

    1. https://community.ibm.com/community/user/blogs/paul-bastide/2024/02/20/multi-arch-compute-getting-started
  • Entering into Kubernetes Network Policies

    Kubernetes Network Policies (NetworkPolicy) resources declaratively manage network access (ingress and egress) within a Kubernetes cluster. Network Policies identify Pods by labels, namespaces, or IP blocks, define the direction of traffic flow (Ingress, Egress), and the protocols, ports, and IPs involved – thus controlling which communication is allowed and disallowed.

    There are good examples on the Kubernetes website: https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource

    1. Identify the Pods to secure, such as the Pods with label role=db. The selector should be as precise as possible. You may want more than one policy per namespace.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: test-network-policy
      namespace: xyz
    spec:
      podSelector:
        matchLabels:
          role: db
    
    2. Set a default deny policy (a minimal sketch follows this list), then add your allow policies per https://spacelift.io/blog/kubernetes-network-policy
    3. Add DNS (UDP port 53) to the policy so you can dynamically look up services in your cluster, per https://snyk.io/blog/kubernetes-network-policy-best-practices/:
      egress: 
        - to:
            - namespaceSelector: {}
              podSelector:
                matchLabels:
                  dns.operator.openshift.io/daemonset-dns: default
          ports:
            - port: 53
              protocol: UDP
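
    A minimal default-deny sketch for the same namespace: it selects every Pod in the namespace and denies any traffic that another policy does not explicitly allow.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: xyz
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress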

    Be sure to capture all of your anticipated traffic. If you get really advanced, you’ll want to use the Network Policy Editor.

    Good luck…

    Reference

    1. NetworkPolicy v1 networking.k8s.io
    2. Network Policy Editor
  • Extending PCI-DSS v4 Support on Red Hat OpenShift Container Platform on IBM Power with the Compliance Operator

    The Compliance Operator is an optional tool within the OpenShift Container Platform that allows administrators to run compliance scans and recommend remediations to bring the cluster into compliance. It utilizes OpenSCAP, a NIST-certified tool, to describe and enforce security policies. The operator is configured to run a set of Platform and Node profiles that check the cluster and associate the checks with PCI-DSS controls, ensuring comprehensive security and compliance.

    To support PCI-DSS v4, administrators can follow the detailed guide provided in the document “Supporting PCI-DSS v4 with the Compliance Operator on the OpenShift Container Platform”. The Power Developer Exchange article walks through the setup, running compliance scans, auto-remediation, and the manual fixes required to configure the environment and facilitate compliance.

    Note that the security-profiles-operator-exists rule will be removed in future Compliance Operator releases. In the meantime, you can disable it with a TailoredProfile:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    metadata:
      name: ocp4-pci-dss-custom
    spec:
      extends: ocp4-pci-dss
      title: PCI-DSS v4 Customized
      disableRules:
        - name: ocp4-pci-dss-security-profiles-operator-exists
          rationale: security profiles operator is not used in the control.
    

    You can see the details in CMP-3278: Misleading rule associated with PCI-DSS 6.4.2 and BSI.
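
    To run scans with the tailored profile, bind it to a scan setting. A minimal sketch, assuming the Compliance Operator’s default ScanSetting in the openshift-compliance namespace:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: pci-dss-v4-custom
      namespace: openshift-compliance
    profiles:
      - apiGroup: compliance.openshift.io/v1alpha1
        kind: TailoredProfile
        name: ocp4-pci-dss-custom
    settingsRef:
      apiGroup: compliance.openshift.io/v1alpha1
      kind: ScanSetting
      name: default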

    Summary

    With the addition of PCI-DSS v4 support, the OpenShift Container Platform on IBM Power continues to enhance its security capabilities, making it an excellent choice for organizations processing credit card payments. By leveraging the Compliance Operator, administrators can ensure their clusters meet the necessary security standards, protecting sensitive payment card data effectively.

    Explore these resources for more detailed information on the Compliance Operator and its supported profiles.

    References

    1. Release notes
    2. Compliance Profiles
    3. Supporting PCI-DSS v4 with the Compliance Operator on the OpenShift Container Platform
  • Adding DISA-STIG Compliance Profiles for Red Hat OpenShift Container Platform on IBM Power

    With the release of Compliance Operator v1.7.0, Red Hat OpenShift Container Platform now supports DISA-STIG profiles for IBM Power. This update includes the rhcos4-disa-stig and ocp4-disa-stig profiles, adhering to the OSCAL format for version v2r2. These profiles ensure that your systems meet the stringent security requirements set by the Defense Information Systems Agency (DISA).

    Key Features

    1. Added Compliance Profiles for IBM Power: The ocp4-stig, ocp4-stig-node, and rhcos4-stig profiles are continuously updated to reflect the latest DISA-STIG benchmarks. This ensures that your systems remain compliant with the most current Defense Information Systems Agency Security Technical Implementation Guide.
    2. Version-Specific Profiles: For those needing to adhere to specific versions, such as DISA-STIG V2R1, the ocp4-stig-v2r1 and ocp4-stig-node-v2r1 profiles are available.
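
    A quick way to list the STIG profiles your Compliance Operator install provides (assuming the default openshift-compliance namespace):

    oc get profiles.compliance.openshift.io -n openshift-compliance | grep stig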

    For more detailed information, you can refer to the following resources:

    1. Release notes
    2. Compliance Profiles
    3. IBM Power Developer Exchange: Supporting DISA-STIG v2r2 with the Compliance Operator on the Red Hat OpenShift Container Platform with IBM Power
    4. IBM Power Developer Exchange: Supporting DISA-STIG with the Compliance Operator on the Red Hat OpenShift Container Platform

    Stay compliant and secure your cluster with the latest updates from Compliance Operator v1.7.0 and IBM Power Systems!

  • OpenShift… if you need a firewall

    If your security posture requires a firewall, you can add one to your OpenShift cluster using the following steps:

    1. Create a Butane configuration
    cat << EOF > 98-nftables-worker.bu
    variant: openshift
    version: 4.16.0
    metadata:
      name: 98-nftables-worker
      labels:
        machineconfiguration.openshift.io/role: worker
    systemd:
      units:
        - name: "nftables.service"
          enabled: true
          contents: |
            [Unit]
            Description=Netfilter Tables
            Documentation=man:nft(8)
            Wants=network-pre.target
            Before=network-pre.target
            [Service]
            Type=oneshot
            ProtectSystem=full
            ProtectHome=true
            ExecStart=/sbin/nft -f /etc/sysconfig/nftables.conf
            ExecReload=/sbin/nft -f /etc/sysconfig/nftables.conf
            ExecStop=/sbin/nft 'add table inet custom_table; delete table inet custom_table'
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target
    storage:
      files:
      - path: /etc/sysconfig/nftables.conf
        mode: 0600
        overwrite: true
        contents:
          inline: |
            table inet custom_table
            delete table inet custom_table
            table inet custom_table {
                chain input {
                    type filter hook input priority 0; policy accept;
                    ip saddr 1.1.1.1/24 drop
                }
            }
    EOF
    
    2. Download Butane
    curl -o butane https://github.com/coreos/butane/releases/download/v0.23.0/butane-ppc64le-unknown-linux-gnu -L
    
    3. Run Butane to generate the MachineConfig

    chmod +x butane; ./butane 98-nftables-worker.bu -o 98-nftables-worker.yaml

    4. Apply the generated 98-nftables-worker.yaml MachineConfig
    oc apply -f 98-nftables-worker.yaml
    

    You can verify that the workers drop the traffic.
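
    One way to check (a sketch; worker-0 is a placeholder node name):

    # wait for the MachineConfigPool to finish rolling out the new MachineConfig
    oc get mcp worker -w

    # then confirm the nftables table is loaded on a worker
    oc debug node/worker-0 -- chroot /host nft list table inet custom_table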

  • Cool Feature… NodeDisruptionPolicies

    I missed this feature in 4.17… until I had to use it: NodeDisruptionPolicies. If you are copying files over, you can avoid a MachineConfigPool reboot for files and the services that depend on them. You can see more details in Using node disruption policies to minimize disruption from machine config changes.

    apiVersion: operator.openshift.io/v1
    kind: MachineConfiguration
    metadata:
      name: cluster
    spec:
      logLevel: Normal
      managementState: Managed
      operatorLogLevel: Normal
    status:
      nodeDisruptionPolicyStatus:
        clusterPolicies:
          files:
          - actions:
            - type: None
            path: /etc/mco/internal-registry-pull-secret.json
    

    Net… you can avoid a reboot when copying over or replacing a file and restarting the related, already-running service.

    FYI, I ran across it in relation to nftables.service: https://access.redhat.com/articles/7090422
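
    A minimal sketch of what that could look like for the nftables.service example from the previous section (assuming OpenShift 4.17+; adjust the path and service name to your setup):

    apiVersion: operator.openshift.io/v1
    kind: MachineConfiguration
    metadata:
      name: cluster
    spec:
      managementState: Managed
      nodeDisruptionPolicy:
        files:
          - path: /etc/sysconfig/nftables.conf
            actions:
              - type: Restart
                restart:
                  serviceName: nftables.service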

  • Red Hat OpenShift Container Platform on IBM Power Systems: Exploring Red Hat’s Multi-Arch Tuning Operator

    The Red Hat Multi-Arch Tuning Operator optimizes workload placement within multi-architecture compute clusters so that Pods run on a compute architecture their containers declare support for. Where Operators, Deployments, ReplicaSets, Jobs, CronJobs, and Pods don’t declare a nodeAffinity, in most cases the Pods that are generated are updated with a node affinity so they land on a supported (declared) CPU architecture.

    For version 1.1.0, the Red Hat Multi-Arch team (@Prashanth684, @aleskandro, @AnnaZivkovic) and the IBM Power Systems team (@pkenchap) have worked together to give cluster administrators better control and flexibility. The feature adds a plugins field to ClusterPodPlacementConfig and introduces a first plugin called nodeAffinityScoring.

    Per the docs, the nodeAffinityScoring plugin adds weights and influence to the scheduler with this process:

    1. Analyzes the Pod’s containers for the supported architectures
    2. Generates the scheduling predicates for nodeAffinity, e.g., a weight of 75 on ppc64le
    3. Filters out nodes that do not meet the Pod requirements, using the predicates
    4. Prioritizes the remaining nodes based on the architecture scores defined in the nodeAffinityScoring.platforms field

    To take advantage of this feature, use the following to asymmetrically load the Power nodes with work:

    apiVersion: multiarch.openshift.io/v1beta1
    kind: ClusterPodPlacementConfig
    metadata:
      name: cluster
    spec:
      logVerbosityLevel: Normal
      namespaceSelector:
        matchExpressions:
          - key: multiarch.openshift.io/exclude-pod-placement
            operator: DoesNotExist
      plugins:
        nodeAffinityScoring:
          enabled: true
          platforms:
            - architecture: ppc64le
              weight: 100
            - architecture: amd64
              weight: 50
    

    Best wishes, and looking forward to hearing how you use the Multi-Arch Tuning Operator on IBM Power with Multi-Arch Compute.

    References

    1. [RHOCP][TE] Multi-arch Tuning Operator: Cluster-wide architecture preferred/weighted affinity
    2. OpenShift 4.18 Docs: Chapter 4. Configuring multi-architecture compute machines on an OpenShift cluster
    3. OpenShift 4.18 Docs: 4.11. Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator
    4. Enhancement: Introducing the namespace-scoped PodPlacementConfig
  • nx-gzip requires active_mem_expansion_capable

    nx-gzip requires the licensed capability active_mem_expansion_capable on the managed system.

    Log in to your HMC and run:

    for MACHINE in my-ranier1 my-ranier2
    do
        echo "MACHINE: ${MACHINE}"
        # list the managed system capabilities and keep the memory-related ones
        for CAPABILITY in $(lssyscfg -r sys -F capabilities -m "${MACHINE}" | sed 's|,| |g' | sed 's|"||g')
        do
            echo "CAPABILITY: ${CAPABILITY}" | grep mem
        done
        echo
    done

    The output shows:

    MACHINE: my-ranier1
    CAPABILITY: active_mem_expansion_capable
    CAPABILITY: hardware_active_mem_expansion_capable
    CAPABILITY: active_mem_mirroring_hypervisor_capable
    CAPABILITY: cod_mem_capable
    CAPABILITY: huge_page_mem_capable
    CAPABILITY: persistent_mem_capable
    
    MACHINE: my-ranier2
    CAPABILITY: cod_mem_capable
    CAPABILITY: huge_page_mem_capable
    CAPABILITY: persistent_mem_capable

    Then you should be all set to use nx-gzip on my-ranier1.

    Best wishes

  • Setting up nx-gzip in a non-privileged container

    *UPDATE: I also found I had to use the power-device-plugin*

    In order to use nx-gzip on Power Systems with a non-privileged container, use the following recipe:

    On each of the nodes, create the SELinux policy file power-nx-gzip.cil:

    (block nx
        (blockinherit container)
        (allow process container_file_t ( chr_file ( map )))
    )
    

    Install the CIL on each worker node:

    sudo semodule -i power-nx-gzip.cil /usr/share/udica/templates/base_container.cil
    

    I ran the following:

    podman run -it --security-opt label=type:nx.process --device=/dev/crypto/nx-gzip registry.access.redhat.com/ubi9/ubi@sha256:a1804302f6f53e04cc1c6b20bc2204d5c9ae6e5a664174b38fbeeb30f7983d4e sh
    

    I copied the files into the container using the CONTAINER ID:

    podman ps
    podman cp temp 6a4d967f3b6b:/tmp
    podman cp gzfht_test 6a4d967f3b6b:/tmp
    

    Then running in the container:

    sh-5.1# cd /tmp
    sh-5.1# ./gzfht_test temp
    file temp read, 1048576 bytes
    compressed 1048576 to 1105922 bytes total, crc32 checksum = 3c56f054
    

    You can use ausearch -m avc -ts recent | audit2allow to track down missing permissions.
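
    If the denials point to missing rules, a typical follow-up (a sketch; the module name is a placeholder) is to generate and load a local policy module:

    ausearch -m avc -ts recent | audit2allow -M power-nx-gzip-local
    semodule -i power-nx-gzip-local.pp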

    Hope this helps you…

    Reference

    https://github.com/libnxz/power-gzip

    https://developers.redhat.com/articles/2025/04/11/my-advice-selinux-container-labeling