Author: Paul

  • Source-to-Image (S2I) Builder Image Updated

    Red Hat has updated the Source-to-Image (S2I) Builder Image to v1.5.0. It now supports FIPS builds on IBM Power; see the release tag for more details.

    You can learn more about using it in the Source-to-Image docs.

    Per the docs, you can follow these instructions (a CLI alternative is sketched after the list):

    1. Log in to the OpenShift Container Platform web console using your login credentials. The default view for the OpenShift Container Platform web console is the Administrator perspective.
    2. Use the perspective switcher to switch to the Developer perspective.
    3. In the +Add view, use the Project drop-down list to select an existing project or create a new project.
    4. Click All services in the Developer Catalog tile.
    5. Click Builder Images under Type to see the available S2I images.
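
    If you prefer the CLI, here is a minimal sketch (the project name and sample repository are examples only):

    # list the builder imagestreams available in the cluster
    oc get imagestreams -n openshift
    # create a project and build/deploy an app from source with an S2I builder
    oc new-project s2i-demo
    oc new-app nodejs~https://github.com/sclorg/nodejs-ex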

    Good luck with your builds!

  • mirror registry for Red Hat OpenShift

    The mirror registry for Red Hat OpenShift is a small-scale container registry included with OpenShift Container Platform subscriptions. As of 4Q 2024, you can now use it with ppc64le.
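
    As a minimal install sketch, assuming you have downloaded the ppc64le mirror-registry bundle to your mirror host (the hostname and install path below are examples):

    ./mirror-registry install \
      --quayHostname mirror.example.internal \
      --quayRoot /opt/quay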

  • IBM API Connect is now available on OpenShift on Power through Cloud Pak for Integration

    Impressive work done by my colleagues to make API Connect available on IBM Power:

    IBM API Connect is now available on IBM Power. Running IBM API Connect on Red Hat OpenShift, clients can leverage the scalable API platform for creating, socializing, managing, and monetizing APIs as they modernize on IBM Power. Read the announcement to learn more: https://ibm.biz/BdGxhp

    #IBM #IBMPower #cp4i #RedHat #OpenShift #API #APIConnect #APIManagement

  • Updates to the Open Source Container images for Power now available in IBM Container Registry

    The IBM Linux on Power team updated the open source container images list on their IBM Container Registry (ICR). You can find out more at https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    • redis v7.4.1-bv: podman pull icr.io/ppc64le-oss/redis-ppc64le:v7.4.1-bv (Nov 21, 2024)
    • mongodb 6.0.13-bv: podman pull icr.io/ppc64le-oss/mongodb-ppc64le:6.0.13-bv (Nov 21, 2024)
    • rocketchat 6.11.1 (MIT): podman pull icr.io/ppc64le-oss/rocketchat-ppc64le:6.11.1 (Nov 21, 2024)

    The milvus 2.4.11 container has also been added to the list of open source containers:

    podman pull icr.io/ppc64le-oss/milvus-ppc64le:v2.4.11
    
  • Red Hat OpenShift 4.17 Now Available on IBM Power

    With the third new release this year, Red Hat OpenShift 4.17 is now generally available, including for IBM® Power®. You can read the release notes here and find the guide for installing OpenShift 4.17 on Power here. This release builds on features included in Red Hat OpenShift 4.15 and 4.16, including an important update to multi-architecture compute that helps clients automate their modernization journeys with Power. Other updates and enhancements for clients deploying on Power focus on scalability, resource optimization, security, developer and system administrator productivity, and more. Here is an overview of key new features and improvements specifically relevant to Power:

    https://community.ibm.com/community/user/power/blogs/brandon-pederson1/2024/11/13/red-hat-openshift-417-now-available-on-ibm-power?CommunityKey=f969f542-83d4-494b-b0b2-018965813d96

    Multiarch Tuning Operator 

    Included with Red Hat OpenShift 4.17 is an update to multi-architecture compute called the Multiarch Tuning Operator. The Multiarch Tuning Operator optimizes workload management across different architectures such as IBM Power, IBM Z, and x86, including single-architecture clusters transitioning to multi-architecture environments. It allows systems administrators to handle scheduling and resource allocation across these different architectures by ensuring workloads are correctly directed to the nodes of compatible architectures. The Multiarch Tuning Operator in OpenShift 4.17 further helps clients optimize resource allocation with policies that automatically place workloads on the most appropriate architecture. This also improves system administrator productivity and is especially useful with business-critical workloads that require high performance or need specific architecture capabilities, such as data-intensive applications often found running on Power.
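
    To give a flavor of how it is enabled, here is a hedged sketch of creating the operator's cluster-scoped placement config once the operator is installed from OperatorHub (the API group and version may differ by operator release; verify against the operator documentation):

    cat << EOF | oc apply -f -
    apiVersion: multiarch.openshift.io/v1beta1
    kind: ClusterPodPlacementConfig
    metadata:
      name: cluster
    EOF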

  • scheduler-plugins has a new release v0.30.6

    The scheduler-plugins project has a new release, v0.30.6. It is used in concert with the Secondary Scheduler Operator.

    • kube-scheduler registry.k8s.io/scheduler-plugins/kube-scheduler:v0.30.6

    This release aligns with Kubernetes v1.30.6.
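
    If you run it through the Secondary Scheduler Operator, a hedged sketch of the SecondaryScheduler custom resource pointing at the v0.30.6 image might look like the following (the resource name, namespace, and field names are recalled from the operator docs; verify before use):

    cat << EOF | oc apply -f -
    apiVersion: operator.openshift.io/v1
    kind: SecondaryScheduler
    metadata:
      name: cluster
      namespace: openshift-secondary-scheduler-operator
    spec:
      # ConfigMap holding your KubeSchedulerConfiguration (assumed to exist already)
      schedulerConfig: secondary-scheduler-config
      schedulerImage: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.30.6
    EOF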

  • IPI PowerVS with FIPS mode

    IPI with FIPS mode creates certificates that are FIPS compliant and ensures the nodes and Operators use the proper cryptographic profiles.

    1. Confirm your host is in FIPS mode and is running a RHEL9-equivalent stream.
    fips-mode-setup --check
    

    Note: you must reboot after enabling FIPS mode, or the check above will not report correctly.
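
    If the check reports that FIPS is disabled, a minimal sketch for enabling it (the reboot is required for it to take effect):

    fips-mode-setup --enable
    reboot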

    2. Download the oc client
    # curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-client-linux-ppc64le-rhel9.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 32.4M  100 32.4M    0     0  17.0M      0  0:00:01  0:00:01 --:--:-- 17.0M
    
    3. Extract the binary files
    # tar xvf openshift-client-linux-ppc64le-rhel9.tar.gz
    oc
    kubectl
    README.md
    

    You can optionally move the oc and kubectl binaries to /usr/local/bin/:
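
    A quick sketch of doing that and confirming the client works:

    # mv oc kubectl /usr/local/bin/
    # oc version --client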

    4. Download the ccoctl
    # curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/ccoctl-linux-rhel9.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 32.4M  100 32.4M    0     0  17.0M      0  0:00:01  0:00:01 --:--:-- 17.0M
    
    5. Extract the ccoctl binary file
    # tar xvf ccoctl-linux-rhel9.tar.gz ccoctl
    ccoctl
    
    6. Change the permissions to make ccoctl executable by running the following command:
    # chmod 755 ccoctl
    
    7. Copy over your pull-secret.txt

    8. Get the Credentials Request pull spec from the release image: https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/release.txt

    Pull From: quay.io/openshift-release-dev/ocp-release@sha256:6507d5a101294c670a283f5b56c5595fb1212bd6946b2c3fee01de2ef661625f
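
    A quick sketch of pulling the same value directly from release.txt:

    # curl -s https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/release.txt | grep "Pull From:"
    Pull From: quay.io/openshift-release-dev/ocp-release@sha256:6507d5a101294c670a283f5b56c5595fb1212bd6946b2c3fee01de2ef661625f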
    
    9. Create the Credential Requests using the PullSpec
    # mkdir -p credreqs
    # oc adm release extract --cloud=powervs --credentials-requests quay.io/openshift-release-dev/ocp-release@sha256:6507d5a101294c670a283f5b56c5595fb1212bd6946b2c3fee01de2ef661625f --to=./credreqs -a pull-secret.txt
    ...
    Extracted release payload created at 2024-10-02T21:38:57Z
    
    10. Verify the credential requests were created. You should see the following files:
    # ls credreqs/
    0000_26_cloud-controller-manager-operator_15_credentialsrequest-powervs.yaml
    0000_30_cluster-api_01_credentials-request.yaml
    0000_30_machine-api-operator_00_credentials-request.yaml
    0000_50_cluster-image-registry-operator_01-registry-credentials-request-powervs.yaml
    0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml
    0000_50_cluster-storage-operator_03_credentials_request_powervs.yaml
    
    11. Create the Credentials
    # export IBMCLOUD_API_KEY=<your ibmcloud apikey>
    # ./ccoctl ibmcloud create-service-id --credentials-requests-dir ./credreqs --name fips-svc --resource-group-name ocp-dev-resource-group
    2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-cloud-controller-manager-ibm-cloud-credentials-credentials.yaml
    2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-machine-api-powervs-credentials-credentials.yaml
    2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-image-registry-installer-cloud-credentials-credentials.yaml
    2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-ingress-operator-cloud-credentials-credentials.yaml
    2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-cluster-csi-drivers-ibm-powervs-cloud-credentials-credentials.yaml
    
    12. Download the latest installer
    curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-install-rhel9-ppc64le.tar.gz
    

    Note: with a FIPS host, you'll want to use the rhel9 build, as it supports FIPS: https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-client-linux-ppc64le-rhel9.tar.gz

    13. Unarchive openshift-install-rhel9-ppc64le.tar.gz

    14. Create the install-config.yaml using openshift-install-fips create install-config, per https://developer.ibm.com/tutorials/awb-deploy-ocp-on-power-vs-ipi/

    15. Edit install-config.yaml and add a new line at the end: fips: true

    [root@fips-ocp-7219-bastion-0 t]# mkdir -p 20241031c
    [root@fips-ocp-7219-bastion-0 t]# cp install-config.yaml-old 20241031c/install-config.yaml
    
    16. Create the manifests with openshift-install-fips create manifests
    # openshift-install-fips create manifests
    WARNING Release Image Architecture not detected. Release Image Architecture is unknown
    INFO Consuming Install Config from target directory
    INFO Adding clusters...
    INFO Manifests created in: cluster-api, manifests and openshift
    
    17. Copy the credential requests into the openshift folder and confirm they are present
    # cp credreqs/manifests/openshift-*yaml 20241031c/openshift/
    # ls openshift/
    99_feature-gate.yaml                                            99_openshift-machineconfig_99-master-ssh.yaml
    99_kubeadmin-password-secret.yaml                               99_openshift-machineconfig_99-worker-fips.yaml
    99_openshift-cluster-api_master-machines-0.yaml                 99_openshift-machineconfig_99-worker-multipath.yaml
    99_openshift-cluster-api_master-machines-1.yaml                 99_openshift-machineconfig_99-worker-ssh.yaml
    99_openshift-cluster-api_master-machines-2.yaml                 openshift-cloud-controller-manager-ibm-cloud-credentials-credentials.yaml
    99_openshift-cluster-api_master-user-data-secret.yaml           openshift-cluster-csi-drivers-ibm-powervs-cloud-credentials-credentials.yaml
    99_openshift-cluster-api_worker-machineset-0.yaml               openshift-config-secret-pull-secret.yaml
    99_openshift-cluster-api_worker-user-data-secret.yaml           openshift-image-registry-installer-cloud-credentials-credentials.yaml
    99_openshift-machine-api_master-control-plane-machine-set.yaml  openshift-ingress-operator-cloud-credentials-credentials.yaml
    99_openshift-machineconfig_99-master-fips.yaml                  openshift-install-manifests.yaml
    99_openshift-machineconfig_99-master-multipath.yaml             openshift-machine-api-powervs-credentials-credentials.yaml
    
    18. Create the cluster: BASE_DOMAIN=powervs-openshift-ipi.cis.ibm.net RELEASE_ARCHITECTURE="ppc64le" openshift-install-fips create cluster
    INFO Creating infrastructure resources...
    INFO Started local control plane with envtest
    INFO Stored kubeconfig for envtest in: /root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig
    INFO Running process: Cluster API with args [-v=2 --diagnostics-address=0 --health-addr=127.0.0.1:45201 --webhook-port=40159 --webhook-cert-dir=/tmp/envtest-serving-certs-1721884268 --kubeconfig=/root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig]
    INFO Running process: ibmcloud infrastructure provider with args [--provider-id-fmt=v2 --v=5 --health-addr=127.0.0.1:37207 --webhook-port=35963 --webhook-cert-dir=/tmp/envtest-serving-certs-3500602992 --kubeconfig=/root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig]
    INFO Creating infra manifests...
    INFO Created manifest *v1.Namespace, namespace= name=openshift-cluster-api-guests
    INFO Created manifest *v1beta1.Cluster, namespace=openshift-cluster-api-guests name=fips-fd4f6
    INFO Created manifest *v1beta2.IBMPowerVSCluster, namespace=openshift-cluster-api-guests name=fips-fd4f6
    INFO Created manifest *v1beta2.IBMPowerVSImage, namespace=openshift-cluster-api-guests name=rhcos-fips-fd4f6
    INFO Done creating infra manifests
    INFO Creating kubeconfig entry for capi cluster fips-fd4f6
    INFO Waiting up to 30m0s (until 9:06AM EDT) for network infrastructure to become ready...
    INFO Network infrastructure is ready
    INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
    INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-0
    INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-1
    INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-2
    INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
    INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-0
    INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-1
    INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-2
    INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
    INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=fips-fd4f6-master
    INFO Waiting up to 15m0s (until 9:02AM EDT) for machines [fips-fd4f6-bootstrap fips-fd4f6-master-0 fips-fd4f6-master-1 fips-fd4f6-master-2] to provision...
    INFO Control-plane machines are ready
    INFO Cluster API resources have been created. Waiting for cluster to become ready...
    INFO Consuming Cluster API Manifests from target directory
    INFO Consuming Cluster API Machine Manifests from target directory
    INFO Waiting up to 20m0s (until 9:21AM EDT) for the Kubernetes API at https://api.fips.powervs-openshift-ipi.cis.ibm.net:6443...
    INFO API v1.31.1 up
    INFO Waiting up to 45m0s (until 9:47AM EDT) for bootstrapping to complete...
    INFO Destroying the bootstrap resources...
    INFO Waiting up to 5m0s for bootstrap machine deletion openshift-cluster-api-guests/fips-fd4f6-bootstrap...
    INFO Shutting down local Cluster API controllers...
    INFO Stopped controller: Cluster API
    INFO Stopped controller: ibmcloud infrastructure provider
    INFO Shutting down local Cluster API control plane...
    INFO Local Cluster API system has completed operations
    INFO no post-destroy requirements for the powervs provider
    INFO Finished destroying bootstrap resources
    INFO Waiting up to 40m0s (until 10:16AM EDT) for the cluster at https://api.fips.powervs-openshift-ipi.cis.ibm.net:6443 to initialize...
    

    If you have any doubts, you can start a second terminal session and use the kubeconfig to verify access:

    # oc --kubeconfig=auth/kubeconfig get nodes
    NAME                      STATUS   ROLES                  AGE     VERSION
    fips-fd4f6-master-0       Ready    control-plane,master   41m     v1.31.1
    fips-fd4f6-master-1       Ready    control-plane,master   41m     v1.31.1
    fips-fd4f6-master-2       Ready    control-plane,master   41m     v1.31.1
    fips-fd4f6-worker-srwf2   Ready    worker                 7m37s   v1.31.1
    fips-fd4f6-worker-tc28p   Ready    worker                 7m13s   v1.31.1
    fips-fd4f6-worker-vrlrq   Ready    worker                 7m12s   v1.31.1
    

    You can also check the cluster operators with oc --kubeconfig=auth/kubeconfig get co

    19. When it's complete, you can log in and use your FIPS-enabled cluster.
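
    A minimal sketch of logging in, using either the generated kubeconfig or the kubeadmin credentials created by the installer:

    # export KUBECONFIG=$(pwd)/auth/kubeconfig
    # oc whoami
    # oc login -u kubeadmin -p "$(cat auth/kubeadmin-password)" https://api.fips.powervs-openshift-ipi.cis.ibm.net:6443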

  • Recommended: How oc-mirror version 2 enables disconnected installations in OpenShift 4.16

    This is a recommended article on oc-mirror and getting started with a fundamental tool in OpenShift.

    https://developers.redhat.com/articles/2024/10/14/how-oc-mirror-version-2-enables-disconnected-installations-openshift-416

    This guide demonstrates the use of oc-mirror v2 to assist in populating a local Red Hat Quay registry that will be used for a disconnected installation, and includes the steps used to configure openshift-marketplace to use catalog sources that point to the local Red Hat Quay registry.
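
    To give a flavor of the workflow, here is a hedged sketch of an oc-mirror v2 invocation that pushes the content described in an ImageSetConfiguration to a local Quay registry (the registry hostname, workspace path, and config file name are examples; see the article for the full steps):

    oc-mirror --v2 -c imageset-config.yaml --workspace file:///opt/mirror docker://registry.example.internal:8443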

  • Coming to Grips with Linux Pressure Stall Information

    Linux Pressure Stall Information (PSI), part of Control Groups v2, provides an accurate accounting of a container's CPU, memory, and I/O. The PSI stats enable accurate, bounded allocation of resources: no over-committing and no over-sizing.

    However, it is sometimes difficult to see whether a container is being limited and could use more resources.

    This article is designed to help you diagnose and check your pods so you can get the best out of your workloads.

    Check your workload

    You can check the container in your Pod’s cpu.stat:

    1. Find the containerId
    [root@cpi-c7b2-bastion-0 ~]# oc get pod -n test test-pod -oyaml | grep -i containerID
      - containerID: cri-o://c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea
    
    2. Connect to the Pod and locate its cgroup path.
    [root@cpi-c7b2-bastion-0 ~]# oc rsh -n test test-pod
    sh-4.4# find /sys -iname '*c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea*'
    /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope
    /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-conmon-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope
    
    3. Check cpu.stat, io.stat, or memory.stat.
    /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-conmon-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope/cpu.stat
    usage_usec 11628232854
    user_usec 8689145332
    system_usec 2939087521
    core_sched.force_idle_usec 0
    nr_periods 340955
    nr_throttled 8
    throttled_usec 8012
    nr_bursts 0
    burst_usec 0
    
    4. We can see that the CPU is being throttled from nr_throttled and throttled_usec. In this case, the impact on the container is minor.
    nr_throttled 8
    throttled_usec 8012
    

    If the container shows a higher number of throttled events, such as the following, you should check the number of CPUs or the amount of memory your container is limited to:

    nr_throttled 103
    throttled_usec 22929315
    
    5. Check the container limits.
    ❯ NS=test
    ❯ POD=test-pod
    ❯ oc get -n ${NS} pod ${POD} -ojson | jq -r '.spec.containers[].resources.limits.cpu'
    8
    
    6. Patch your Pod or update your application to increase the CPU limit, as sketched below.
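
    A hedged sketch, assuming the Pod is owned by a Deployment named test-deploy (adjust the owner and values for your workload):

    ❯ oc -n test set resources deployment/test-deploy --limits=cpu=16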

    Checking real-time stats

    You can check the real-time pressure stats for your containers. Log on to your host and run:

    find /sys/fs/cgroup/kubepods.slice/ -iname cpu.pressure  | xargs -t -I {} cat {} | grep -v total=0
    find /sys/fs/cgroup/kubepods.slice/ -iname memory.pressure  | xargs -t -I {} cat {} | grep -v total=0
    find /sys/fs/cgroup/kubepods.slice/ -iname io.pressure  | xargs -t -I {} cat {} | grep -v total=0
    

    The following loop shows only the pods that are actually under pressure.

    for PRESSURE in $( find /sys/fs/cgroup/kubepods.slice/ -iname io.pressure)
    do
        # keep only files that have at least one line with a non-zero total
        if [ ! -z "$(cat ${PRESSURE} | grep -v total=0)" ]
        then
            # and at least one line with non-zero averages
            if [ ! -z "$(cat ${PRESSURE} | grep -v "avg10=0.00 avg60=0.00 avg300=0.00")" ]
            then
                echo ${PRESSURE}
            fi
        fi
    done
    
    ❯ cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde03ef16_000a_4198_9e04_ac96d0ea33c5.slice/crio-d200161683a680588c4de8346ff58d633201eae2ffd558c8d707c4836215645e.scope/io.pressure
    some avg10=14.02 avg60=14.16 avg300=13.99 total=4121355556
    full avg10=14.02 avg60=14.16 avg300=13.99 total=4121050788
    

    In this case, I was able to go in and increase the total I/O available to the container.

    Tweak

    You can temporarily tweak the cpu.pressure trigger settings for a pod or for the system so the evaluation window is extended (the values below use the longest window possible).

    The maximum window size is 10 seconds; on kernels older than 6.5, the minimum window size is 500ms.

    cat << EOF > /sys/fs/cgroup/cpu.pressure
    some 10000000 10000000
    full 10000000 10000000
    EOF
    

    Disabling psi in OpenShift

    There are two methods to disable PSI in OpenShift: the first is to set a kernel parameter, and the second is to switch from cgroups v2 to cgroups v1.

    Switch from cgroups v2 to cgroups v1

    You can switch from cgroups v2 to cgroups v1; see Configuring the Linux cgroup version on your nodes.

    ❯ oc patch nodes.config cluster --type merge -p '{"spec": {"cgroupMode": "v1"}}'
    

    You’ll have to wait for each of the Nodes to restart.

    Set the Kernel Parameter psi=0

    In OpenShift, you can disable PSI by using a MachineConfig.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-psi-disable
    spec:
      kernelArguments:
      - psi=0
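
    Save the manifest to a file and apply it; the Machine Config Operator rolls the change out and reboots the worker nodes (the file name here is just an example):

    oc apply -f 99-worker-psi-disable.yaml
    oc get mcp worker -w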
    

    Check whether psi is enabled

    You can check whether it is enabled by reading one of the cpu.pressure, io.pressure, or memory.pressure files. If PSI is disabled, you'll see “Operation not supported”.

    sh-5.1# cat /sys/fs/cgroup/cpu.pressure
    cat: /sys/fs/cgroup/cpu.pressure: Operation not supported
    

    or

    oc debug node/<node_name>
    chroot /host
    # tmpfs indicates cgroup v1; cgroup2fs indicates cgroup v2
    stat -c %T -f /sys/fs/cgroup
    tmpfs
    

    Summary

    Linux PSI is pretty awesome. However, you should check your workload and verify it’s running correctly.

    References