Tag: openshift

  • Setting up nfs-provisioner on OpenShift on Power Systems with a template

    Here are my notes for setting up the SIG’s nfs-provisioner. Follow these directions to set up kubernetes-sigs/nfs-subdir-external-provisioner.

    1. If you haven’t already, you need to create the nfs-provisioner namespace.

    a. Create the namespace

    oc new-project nfs-provisioner

    b. Label the namespace with elevated privileges so we can create NFS mounts

    # oc label namespace/nfs-provisioner security.openshift.io/scc.podSecurityLabelSync=false --overwrite=true
    namespace/nfs-provisioner labeled
    # oc label namespace/nfs-provisioner pod-security.kubernetes.io/enforce=privileged --overwrite=true
    namespace/nfs-provisioner labeled
    # oc label namespace/nfs-provisioner pod-security.kubernetes.io/enforce-version=v1.24 --overwrite=true
    namespace/nfs-provisioner labeled
    # oc label namespace/nfs-provisioner pod-security.kubernetes.io/audit=privileged --overwrite=true
    namespace/nfs-provisioner labeled
    # oc label namespace/nfs-provisioner pod-security.kubernetes.io/warn=privileged --overwrite=true
    namespace/nfs-provisioner labeled
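
    To confirm the labels took effect, you can list them back:

    oc get namespace/nfs-provisioner --show-labels
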
    2. Download the storage-class-nfs-template
    # curl -O -L https://raw.githubusercontent.com/IBM/ocp4-power-workload-tools/main/manifests/storage/storage-class-nfs-template.yaml
    3. Set up authorization
    oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:nfs-provisioner:nfs-client-provisioner
    4. Process the template with the NFS_PATH and NFS_SERVER
    # oc process -f storage-class-nfs-template.yaml -p NFS_PATH=/data -p NFS_SERVER=10.17.2.138 | oc apply -f -
    
    deployment.apps/nfs-client-provisioner created
    serviceaccount/nfs-client-provisioner created
    clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
    clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
    role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
    storageclass.storage.k8s.io/nfs-client created
    5. Get the pods
    oc get pods
    NAME                                     READY   STATUS    RESTARTS   AGE
    nfs-client-provisioner-b8764c6bb-mjnq9   1/1     Running   0          36s
    6. Check the storage class. You should see nfs-client listed; it is the default.

    ❯ oc get sc

    NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

    nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  3m27s

    If you see more than the nfs-client listed, you may have to change the defaults by unsetting the other storage class:

    oc patch storageclass storageclass-name -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
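
    Conversely, if nfs-client is not marked as the default, the inverse patch sets it:

    oc patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'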

  • April 2024 Updates

    Here are some updates for April 2024.

    FYI: I was made aware of kubernetes-sigs/kube-scheduler-simulator and the release simulator/v0.2.0.

    That’s why we are developing a simulator for kube-scheduler — you can try out the behavior of the scheduler while checking which plugin made what decision for which Node.

    https://github.com/kubernetes-sigs/kube-scheduler-simulator/tree/simulator/v0.2.0

    The Linux on Power team added new Power-supported containers.

    cassandra   4.1.3    docker pull icr.io/ppc64le-oss/cassandra-ppc64le:4.1.3   April 2, 2024
    milvus      v2.3.3   docker pull icr.io/ppc64le-oss/milvus-ppc64le:v2.3.3     April 2, 2024
    rust        1.66.1   docker pull icr.io/ppc64le-oss/rust-ppc64le:1.66.1       April 2, 2024
    mongodb     5.0.26   docker pull icr.io/ppc64le-oss/mongodb-ppc64le:5.0.26    April 9, 2024
    mongodb     6.0.13   docker pull icr.io/ppc64le-oss/mongodb-ppc64le:6.0.13    April 9, 2024
    logstash    8.11.3   docker pull icr.io/ppc64le-oss/logstash-ppc64le:8.11.3   April 9, 2024
    
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    I added a new fix for setting the imagestream import schedule.

    https://gist.github.com/prb112/838d8c2ae908b496f5d5480411a7d692
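
    The gist has the full fix; as a rough sketch, a scheduled import can be enabled on an imagestream tag with the --scheduled flag (the imagestream and tag below are illustrative):

    oc tag icr.io/ppc64le-oss/cassandra-ppc64le:4.1.3 cassandra:4.1.3 --scheduled=true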

    An article worth rekindling in our memories…

    Optimal LPAR placement for a Red Hat OpenShift cluster within IBM PowerVM

    Optimal logical partition (LPAR) placement can be important to improve the performance of workloads, as it favors efficient use of the memory and CPU resources on the system. However, for certain configurations and settings, such as I/O device allocation to the partition, the amount of memory allocated, and the CPU entitlement of the partition, we might not get the desired LPAR placement. In such situations, the technique described in this blog enables you to place the LPAR in a desired optimal configuration.

    https://community.ibm.com/community/user/powerdeveloper/blogs/mel-bakhshi/2022/08/11/openshift-lpar-placement-powervm

    There is an updated list of Red Hat products supporting IBM Power.

    https://community.ibm.com/community/user/powerdeveloper/blogs/ashwini-sule/2024/04/05/red-hat-products-mar-2024

    Enhancing container security with Aqua Trivy on IBM Power

    … IBM Power development team found that Trivy is as effective as other open source scanners in detecting vulnerabilities. Not only does Trivy prove to be suitable for container security in IBM Power clients’ DevSecOps pipelines, but the scanning process is simple. IBM Power’s support for Aqua Trivy underscores its industry recognition for its efficacy as an open source scanner.

    https://community.ibm.com/community/user/powerdeveloper/blogs/jenna-murillo/2024/04/08/enhanced-container-security-with-trivy-on-power

    Podman 5.0 is released

    https://blog.podman.io/2024/03/podman-5-0-has-been-released/
  • Replay: Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    I presented on:

    The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.

    Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures.

    Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. After, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

    Go here to see the replay: https://ibm.webcasts.com/starthere.jsp?ei=1660167&tp_key=ddb6b00dbd

  • Red Hat OpenShift Multi-Architecture Compute – Demo MicroServices on IBM Power Systems

    Shows a Microservices Application running on Red Hat OpenShift Control Plane on IBM Power Systems with an Intel worker.

  • Updates for End of March 2024

    Here are some great updates for the end of March 2024.

    Sizing and configuring an LPAR for AI workloads

    Sebastian Lehrig has a great introduction into CPU/AI/NUMA on Power10.

    https://community.ibm.com/community/user/powerdeveloper/blogs/sebastian-lehrig/2024/03/26/sizing-for-ai

    FYI: a new article has been published: Improving the User Experience for Multi-Architecture Compute on IBM Power.

    More and more IBM® Power® clients are modernizing securely with lower risk and faster time to value with cloud-native microservices on Red Hat® OpenShift® running alongside their existing banking and industry applications on AIX, IBM i, and Linux. With the availability of Red Hat OpenShift 4.15 on March 19th, Red Hat and IBM introduced a long-awaited innovation called Multi-Architecture Compute that enables clients to mix Power and x86 worker nodes in a single Red Hat OpenShift cluster. With the release of Red Hat OpenShift 4.15, clients can now run the control plane for a Multi-Architecture Compute cluster natively on Power.

    Some tips for setting up a Multi-Arch Compute Cluster

    If you are setting up a multi-arch compute cluster manually, rather than using automation, you’ll want to follow this process:

    1. Set up the initial cluster with the multi payload on Intel or Power for the control plane.
    2. Open the network ports between the two environments

    ICMP/TCP/UDP flowing in both directions

    3. Configure the Cluster

    a. Change any MTU between the networks

    oc patch Network.operator.openshift.io cluster --type=merge --patch \
        '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 1350 } , "machine": { "to" : 9100} } } } }'
    

    b. Limit CSI drivers to a single architecture

    oc annotate --kubeconfig /root/.kube/config ns openshift-cluster-csi-drivers \
      scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=amd64
    

    c. Disable offloading (I do this in the ignition)
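
    For reference, the base64 payload in the ignition file shown in step 4 below decodes to a NetworkManager dispatcher script along these lines, which disables tx-checksumming on the env2 interface:

    # /etc/NetworkManager/dispatcher.d/20-ethtool
    if [ "$1" = "env2" ] && [ "$2" = "up" ]
    then
      echo "Turning off tx-checksumming"
      /sbin/ethtool --offload env2 tx-checksumming off
    else
      echo "not running tx-checksumming off"
    fi
    if systemctl is-failed NetworkManager-wait-online
    then
      systemctl restart NetworkManager-wait-online
    fi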

    d. Move the imagepruner jobs to the architecture that makes the most sense

    oc patch imagepruner/cluster -p '{ "spec" : {"nodeSelector": {"kubernetes.io/arch" : "amd64"}}}' --type merge
    

    e. Move the ingress operator pods to the arch that makes the most sense. If you want the ingress pods to be on Intel, then patch the cluster.

    oc edit IngressController default -n openshift-ingress-operator
    

    Change ingresscontroller.spec.nodePlacement.nodeSelector to use kubernetes.io/arch: amd64 to move the workload to Intel only.
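
    If you prefer a one-liner over oc edit, an equivalent merge patch looks like this:

    oc patch ingresscontroller/default -n openshift-ingress-operator --type merge \
      -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"kubernetes.io/arch":"amd64"}}}}}'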

    f. Use routing via host

    oc patch network.operator/cluster --type merge -p \
      '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'
    

    Wait until the MCP is finished updating and has the latest MTU
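
    One way to watch for that (the timeout value is an arbitrary choice):

    oc wait mcp/master mcp/worker --for=condition=Updated=True --timeout=45m
    oc get network.config cluster -o jsonpath='{.status.clusterNetworkMTU}{"\n"}'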

    g. Download the ignition file and host it on the local network via HTTP.

    4. Create a new VSI worker and point to the ignition in userdata
    {
        "ignition": {
            "version": "3.4.0",
            "config": {
                "merge": [
                    {
                        "source": "http://${ignition_ip}:8080/ignition/worker.ign"
                    }
                ]
            }
        },
        "storage": {
            "files": [
                {
                    "group": {},
                    "path": "/etc/hostname",
                    "user": {},
                    "contents": {
                        "source": "data:text/plain;base64,${name}",
                        "verification": {}
                    },
                    "mode": 420
                },
                {
                    "group": {},
                    "path": "/etc/NetworkManager/dispatcher.d/20-ethtool",
                    "user": {},
                    "contents": {
                        "source": "data:text/plain;base64,aWYgWyAiJDEiID0gImVudjIiIF0gJiYgWyAiJDIiID0gInVwIiBdCnRoZW4KICBlY2hvICJUdXJuaW5nIG9mZiB0eC1jaGVja3N1bW1pbmciCiAgL3NiaW4vZXRodG9vbCAtLW9mZmxvYWQgZW52MiB0eC1jaGVja3N1bW1pbmcgb2ZmCmVsc2UgCiAgZWNobyAibm90IHJ1bm5pbmcgdHgtY2hlY2tzdW1taW5nIG9mZiIKZmkKaWYgc3lzdGVtY3RsIGlzLWZhaWxlZCBOZXR3b3JrTWFuYWdlci13YWl0LW9ubGluZQp0aGVuCnN5c3RlbWN0bCByZXN0YXJ0IE5ldHdvcmtNYW5hZ2VyLXdhaXQtb25saW5lCmZpCg==",
                        "verification": {}
                    },
                    "mode": 420
                }
            ]
        }
    }
    

    ${name} is base64 encoded.
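
    For example, to generate the value for a hypothetical worker hostname (-n keeps the trailing newline out of the encoding):

    echo -n "worker-amd64-0" | base64
    d29ya2VyLWFtZDY0LTA=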

    5. Post-configuration tasks

    a. Configure shared storage using the nfs provisioner, and limit it to running on the architecture that hosts the NFS shared volumes.

    b. Approve the CSRs for the workers. Do this carefully, as it’s easy to lose count; the pending CSRs may also include Machine updates.
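
    A sketch for reviewing the pending CSRs and approving them in bulk:

    oc get csr
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve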

    6. Check the cluster operators and nodes; everything should be up and working.
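
    A quick health check once everything settles:

    oc get clusteroperators
    oc get nodes -o wide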
  • Multi-Architecture Compute: Managing User Provisioned Infrastructure Load Balancers with Post-Installation workers

    From https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/03/21/multi-architecture-compute-managing-user-provision?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Multi-Arch Compute for Red Hat OpenShift Container Platform on IBM Power systems lets one use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures. The cluster owner is able to add an additional worker post installation.

    With User Provisioned Infrastructure (UPI), the cluster owner may have used automation or manual setup of front-end load balancers. The IBM team provides PowerVS ocp4-upi-powervs, PowerVM ocp4-upi-powervm and HMC ocp4-upi-powervm-hmc automation.

    When installing a cluster, the cluster is set up with an external load balancer, such as haproxy. The external load balancer routes traffic to backend pools for the Ingress pods, the API server, and the MachineConfig server. The haproxy configuration is stored at /etc/haproxy/haproxy.cfg.

    For instance, the configuration for the ingress-https load balancer would look like the following:

    frontend ingress-https
            bind *:443
            default_backend ingress-https
            mode tcp
            option tcplog
    
    backend ingress-https
            balance source
            mode tcp
            server master0 10.17.15.11:443 check
            server master1 10.17.19.70:443 check
            server master2 10.17.22.204:443 check
            server worker0 10.17.26.89:443 check
            server worker1 10.17.30.71:443 check
            server worker2 10.17.30.225:443 check
    

    When adding a post-installation worker to a UPI cluster, one must update the ingress-http and ingress-https backends.

    1. Get the IP and hostname
    # oc get nodes -lkubernetes.io/arch=amd64 --no-headers=true -ojson | jq  -c '.items[].status.addresses'
    [{"address":"10.17.15.11","type":"InternalIP"},{"address":"worker-amd64-0","type":"Hostname"}]
    [{"address":"10.17.19.70","type":"InternalIP"},{"address":"worker-amd64-1","type":"Hostname"}]
    
    2. Edit the /etc/haproxy/haproxy.cfg

    a. Find backend ingress-http, then before the first server entry add the worker hostnames and IPs.

            server worker-amd64-0 10.17.15.11:80 check
            server worker-amd64-1 10.17.19.70:80 check
    

    b. Find backend ingress-https, then before the first server entry add the worker hostnames and IPs.

            server worker-amd64-0 10.17.15.11:443 check
            server worker-amd64-1 10.17.19.70:443 check
    

    c. Save the config file.

    3. Restart the haproxy
    # systemctl restart haproxy
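
    You can also validate the configuration file and confirm the service is healthy:

    haproxy -c -f /etc/haproxy/haproxy.cfg
    systemctl is-active haproxy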
    

    You now have the additional workers incorporated into haproxy, and as the ingress pods move from Power to Intel and back, you have a fully functional environment.

    Best wishes.

    Paul

    P.S. You can learn more about scaling up the ingress controller at Scaling an Ingress Controller.

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
    

    P.P.S. If you are running very advanced scenarios, you can change the ingresscontroller spec.nodePlacement.nodeSelector to put the workload on specific architectures. See Configuring an Ingress Controller.

    nodePlacement:
      nodeSelector:
        matchLabels:
          kubernetes.io/arch: ppc64le
  • OpenShift 4.15

    IBM announced the availability of Red Hat OpenShift 4.15 on IBM Power. Read more about it at
    https://community.ibm.com/community/user/powerdeveloper/blogs/brandon-pederson1/2024/03/15/red-hat-openshift-415-now-available-on-ibm-power

    I worked on the following:

    In Red Hat OpenShift 4.14, Multi-Architecture Compute was introduced for the IBM Power and IBM Z platforms, enabling a single heterogeneous cluster across different compute architectures. With the release of Red Hat OpenShift 4.15, clients can now add x86 compute nodes to a multi-architecture enabled cluster running on Power. This simplifies deployment across different environments even further and provides a more consistent management experience. Clients are accelerating their modernization journeys with multi-architecture compute and Red Hat OpenShift by exploiting the best-fit architecture for different solutions and reducing cost and complexity of workloads that require multiple compute architectures.

  • A couple IBM Power related updates

    A couple quick updates…

    OpenTofu – a Terraform-compatible build for ppc64le

    The Oregon State University Open Source Lab (OSU OSL) provides Power servers to develop and test open source projects on the Power Architecture platform. OSU OSL provides ppc64le VMs and bare metal machines as well as CI. Read more about their Power services here.

    You can download the latest version of OpenTofu for ppc64le here. A pull request for a documentation update has now merged. View the official OpenTofu documentation here.

    https://community.ibm.com/community/user/powerdeveloper/blogs/mick-tarsel/2024/03/04/opentofu-openshift-ppc64le

    Cost Management for OpenShift is a SaaS offering that provides users cost visibility across their hybrid cloud environments. The Cost Management Operator obtains OpenShift usage data by querying Prometheus every hour to create usage reports, which are then uploaded to Cost Management at console.redhat.com to be processed and viewed.

    Red Hat Cost Management is now available on IBM Power with the latest release, version 3.2.

     https://community.ibm.com/community/user/powerdeveloper/blogs/jason-cho2/2024/03/04/red-hat-cost-management-on-ibm-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    FYI: Chandan posted Multi-Architecture Compute: Supporting Architecture Specific Operating System and Kernel Parameters https://community.ibm.com/community/user/powerdeveloper/blogs/chandan-abhyankar/2024/03/06/multi-architecture-compute-supporting-architecture

  • Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    FYI: Webinar: Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    Summary

    The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.

    Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures.

    Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. After, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

    This presentation sets the background and gets you started so you can set up, configure, and schedule workloads. There will be a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.
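
    As a taste of the scheduling topic, here is a minimal sketch of pinning a workload to one architecture (the deployment name and image are illustrative):

    oc create deployment arch-demo --image=registry.access.redhat.com/ubi9/ubi -- sleep infinity
    oc patch deployment arch-demo -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"ppc64le"}}}}}'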

    Please join me on 11 April 2024, 9:00 AM ET. Please share any questions by clicking on the Reply button. If you have not done so already, register here and download it to your calendar.