Category: OpenShift

  • cert-manager Operator for Red Hat OpenShift v1.13

    The IBM Power development team is happy to introduce the cert-manager Operator for Red Hat OpenShift on Power. cert-manager is a “cluster-wide service that provides application certificate lifecycle management”. This service manages certificates and integrates with external certificate authorities using the Automated Certificate Management Environment (ACME) protocol.

    For v1.13, the release notes also describe expanded support for multiple architectures: AMD64, IBM Z® (s390x), IBM Power® (ppc64le), and ARM64.

    This is exciting, and I’ll give you a flavor of how to use cert-manager with your OpenShift cluster. I’ll demonstrate how to use Let’s Encrypt for the HTTP01 challenge type and IBM Cloud Internet Services paired with Let’s Encrypt for the DNS01 challenge type.

    This write-up uses a 4.13 cluster on IBM PowerVS deployed with ocp4-upi-powervs; the same steps apply to 4.14 and to on-premises environments. To facilitate the HTTP01 challenge type, the IBM Cloud Services section of the configuration is used:

    ### Using IBM Cloud Services
    use_ibm_cloud_services     = true
    ibm_cloud_vpc_name         = "rdr-cert-manager-vpc"
    ibm_cloud_vpc_subnet_name  = "sn-20231206-01"
    ibm_cloud_resource_group = "resource-group"
    iaas_vpc_region           = "au-syd"               # if empty, will default to ibmcloud_region.
    ibm_cloud_cis_crn         = "crn:v1:bluemix:public:internet-svcs:global:a/<account_id>:<cis_instance_id>::"
    ibm_cloud_tgw             = "rdr-sec-certman"  # Name of existing Transit Gateway where VPC and PowerVS targets are already added.
    

    This means you would have a CIS instance set up with a real domain linked. You would configure the IBM Cloud VPC to connect to the PowerVS workspace over a Transit Gateway; ideally the connection uses the PER networking feature of PowerVS. This sets up a real hostname for the callback from Let’s Encrypt and configures the load balancers, which support port 80/443 traffic.

    To set up cert-manager, log in to the Web Console as an administrator.

    1. Click on Operators > OperatorHub
    2. Filter on cert-manager
    3. Select cert-manager for Red Hat OpenShift
    4. Click Install using the namespace provided
    5. Wait a few minutes for it to install.
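
    To confirm the install from the command line, a quick check like the following should show the operator and operand pods running. This assumes the default namespaces used by the operator (cert-manager-operator for the operator, cert-manager for the operands):

    oc get pods -n cert-manager-operator
    oc get pods -n cert-manager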

    You now have a working cert-manager Operator and are ready for the HTTP01 challenge type. For this, we switch to the command line.

    1. Log in via the command line as a cluster-admin.
    2. Set up the letsencrypt-http01 Issuer:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-http01
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
        - http01:
            ingress:
              class: openshift-default
    EOF
    

    Note: the above uses the production Let’s Encrypt server; you could use staging instead. Be careful about how many certificates you create and which service you use, as rate limiting may apply.
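
    Before requesting a certificate, you can optionally confirm the Issuer has registered with the ACME server; a quick check, using the Issuer name from the example above:

    oc get issuer letsencrypt-http01
    oc describe issuer letsencrypt-http01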

    3. Let’s create a certificate for a hostname under my cluster’s apps domain.
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-http01
    spec:
      dnsNames:
      - testa.$(oc project --short).apps.cm-4a41.domain.name
      issuerRef:
        name: letsencrypt-http01
      secretName: cert-test-http01b-sec
    EOF
    
    4. We can check the progress using oc:
    #  oc get certificate,certificaterequest,order
    NAME                                         READY SECRET                AGE
    certificate.cert-manager.io/cert-test-http01 True  cert-test-dns01-b-sec 48m
    
    NAME                                                APPROVED DENIED READY ISSUER                                 REQUESTOR                                         AGE
    certificaterequest.cert-manager.io/cert-test-http01 True            True  letsencrypt-prody                      system:serviceaccount:cert-manager:cert-manager   25m
    
    NAME                                                     STATE   AGE
    order.acme.cert-manager.io/cert-test-http01-3937192702   valid   25m
    

    Once the order switches from Pending to valid, your certificate is now available in the secret.

    5. Get the certificate using oc. You can also mount the secret or use it for a route.
    oc get secret cert-test-http01b-sec -oyaml
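
    If you need the certificate and key as files, for example to inspect them or to wire them into a route manually, oc extract can pull them out of the secret. A minimal sketch using the secret name from above (the /tmp/certs target directory is arbitrary):

    mkdir -p /tmp/certs
    oc extract secret/cert-test-http01b-sec --keys=tls.crt --keys=tls.key --to=/tmp/certs
    openssl x509 -in /tmp/certs/tls.crt -noout -subject -dates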
    

    If you don’t have direct access to the internet, or HTTP01 is not an option, you can use the cert-manager-webhook-ibmcis webhook for the DNS01 challenge type.

    1. Clone the repository git clone https://github.com/IBM/cert-manager-webhook-ibmcis.git
    2. Change to the directory cd cert-manager-webhook-ibmcis
    3. Create the webhook project oc new-project cert-manager-webhook-ibmcis
    4. Update the pod-security labels:
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/warn=privileged --overwrite=true
    
    5. Create the ibmcis deployment oc apply -f cert-manager-webhook-ibmcis.yaml
    6. Once the pods in the cert-manager-webhook-ibmcis namespace are available and ready, we can proceed.
    7. Create the API token secret. It is recommended that you use a service ID with access scoped to your CIS instance.
    oc create secret generic ibmcis-credentials --from-literal=api-token="<YOUR API KEY>" 
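
    If you don’t already have a scoped API key, one way to create it with the ibmcloud CLI is to create a service ID, grant it access to the Internet Services offering, and then create an API key for it. A rough sketch, where the names are placeholders and the access policy flags may need adjusting for your account:

    # Create a service ID for the webhook
    ibmcloud iam service-id-create cert-manager-cis-sid -d "cert-manager DNS01 solver"
    # Grant it access to IBM Cloud Internet Services (scope further to your instance if desired)
    ibmcloud iam service-policy-create cert-manager-cis-sid --roles Manager --service-name internet-svcs
    # Create the API key to place in the ibmcis-credentials secret
    ibmcloud iam service-api-key-create cert-manager-cis-key cert-manager-cis-sid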
    
    8. Retrieve your CRN using the ibmcloud CLI, and save the ID.
    ❯ ibmcloud cis instances
    Retrieving service instances for service 'internet-svcs'
    OK
    Name                      ID                Location   State    Service Name
    mycis       crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::   global     active   internet-svcs
    
    9. Create the ClusterIssuer, updating YOUR_EMAIL and the CIS CRN.
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prody
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
    
        # Email address used for ACME registration
        email: <YOUR_EMAIL>
    
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
    
        solvers:
        - dns01:
            webhook:
              groupName: acme.borup.work
              solverName: ibmcis
              config:
                apiKeySecretRef:
                  name: ibmcis-credentials
                  key: api-token
                cisCRN: 
                  - "crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::"
    EOF
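
    Before requesting a certificate, you can optionally confirm that the ClusterIssuer has registered with the ACME server; a quick check using the name from the example above:

    oc get clusterissuer letsencrypt-prody
    oc describe clusterissuer letsencrypt-prody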
    
    10. Create the DNS01 Certificate
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-dns01-b
      namespace: cert-manager-webhook-ibmcis
    spec:
      commonName: "ts-a.cm-4a41.domain.name"
      dnsNames:
      - "ts-a.cm-4a41.domain.name"
      issuerRef:
        name: letsencrypt-prody
        kind: ClusterIssuer
      secretName: cert-test-dns01
    EOF
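
    While the DNS01 challenge is being solved, you can watch the intermediate resources that cert-manager creates; a quick check scoped to the namespace used above:

    oc get certificaterequests,orders,challenges -n cert-manager-webhook-ibmcis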
    
    11. Wait until your certificate is READY=True
    # oc get certificate
    NAME                                      READY   SECRET                                    AGE
    cert-test-dns01-b                         True    cert-test-dns01-b-sec                     75m
    

    You’ve seen how to use both challenge types, HTTP01 and DNS01, with Let’s Encrypt and IBM Cloud Internet Services, and you’re ready to go.

    Best wishes,

    The Dev Team

  • Multi-Arch Compute Node Selector

    Originally posted at https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/09/multi-arch-compute-node-selector?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    The OpenShift Container Platform Multi-Arch Compute feature supports pairing the ppc64le and amd64 processor (ISA) architectures in a single cluster. With this pairing, there are various permutations when scheduling Pods. Fortunately, the platform has controls over where work is scheduled in the cluster. One of these controls is called the node selector. This article outlines how to use node selectors at different levels: Pod, Project/Namespace, and Cluster.

    Pod Level

    Per OpenShift 4.14: Placing pods on specific nodes using node selectors, a node selector is a map of key/value pairs used to determine where work is scheduled. A Pod’s nodeSelector values must match a Node’s labels for the Pod to be eligible for scheduling on that Node. If you need more advanced boolean logic, you can use affinity and anti-affinity rules. See Kubernetes: Affinity and anti-affinity.

    Consider the Pod definition for test below: the nodeSelector has x: y and is matched against a Node whose .metadata.labels contain x: y.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        x: y
    

    You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. To direct a Pod to a Power node, you could use the kubernetes.io/arch: ppc64le label.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: ppc64le
    

    You can see where the Pod is scheduled using oc get pods -owide.

    ❯ oc get pods -owide
    NAME READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
    test 1/1     Running   0          24d   10.130.2.9    mac-acca-worker-1 <none>           <none>
    

    You can confirm the architecture of each node with oc get nodes mac-acca-worker-1 -owide. You’ll see that the kernel version is suffixed with ppc64le.

    ❯ oc get nodes mac-acca-worker-1 -owide
    NAME                STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
    mac-acca-worker-1   Ready    worker   25d   v1.28.3+20a5764   192.168.200.11   <none>        Red Hat Enterprise Linux CoreOS 414.92.....   5.14.0-284.41.1.el9_2.ppc64le   cri-o://1.28.2-2.rhaos4.14.gite7be4e1.el9
    

    This approach also applies to higher-level Kubernetes abstractions such as ReplicaSets, Deployments, or DaemonSets, where the nodeSelector goes into the Pod template, as shown in the sketch below.
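
    A minimal sketch of a Deployment pinned to Power nodes (the image and names are illustrative), with the nodeSelector under spec.template.spec:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deploy
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: test-deploy
      template:
        metadata:
          labels:
            app: test-deploy
        spec:
          containers:
            - name: test
              image: "ocp-power.xyz/test:v0.0.1"
          nodeSelector:
            kubernetes.io/arch: ppc64le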

    Project / Namespace Level

    Per OpenShift 4.14: Creating project-wide node selectors, you may not have control over how Pods are created within a Project or Namespace, which would otherwise leave you without control over Pod placement. Kubernetes and OpenShift provide namespace-level controls for Pod placement when control over the Pod definition is not possible.

    Kubernetes enables this feature through the Namespace annotation scheduler.alpha.kubernetes.io/node-selector. You can read more about its internal behavior.

    You can annotate the namespace:

    oc annotate ns example scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=ppc64le
    

    OpenShift enables this feature through the openshift.io/node-selector Namespace annotation.

    oc annotate ns example openshift.io/node-selector=kubernetes.io/arch=ppc64le
    

    These annotations direct Pods in the namespace to nodes of the desired architecture. The same setting can also be expressed declaratively, as in the sketch below.
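
    A minimal sketch of a Namespace manifest carrying the OpenShift annotation (the namespace name matches the commands above):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: example
      annotations:
        openshift.io/node-selector: kubernetes.io/arch=ppc64le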

    Cluster Level

    Per OpenShift 4.14: Creating default cluster-wide node selectors, control over Pod creation may not be available, or you may simply want a default. In this case, you control Pod placement through the cluster-wide default node selector.

    To configure the cluster-wide default, patch the Scheduler Operator custom resource (CR).

    oc patch Scheduler cluster --type=merge --patch '{"spec": { "defaultNodeSelector": "kubernetes.io/arch=ppc64le" } }'
    

    To schedule work onto the other architecture in the pair, you MUST define a nodeSelector on the workload to override the default, as in the sketch below.
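
    A minimal sketch of a Pod that overrides the ppc64le cluster-wide default and lands on an amd64 node (the image is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-amd64
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: amd64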

    Summary

    You have seen how to control the distribution of work and how to schedule work with multiple architectures.

    In a future blog, I’ll cover the Multiarch Manager Operator, which aims to address problems and usability issues encountered when working with OpenShift clusters with multi-architecture compute nodes.

  • Multi Arch Compute OpenShift Container Platform (OCP) cluster on IBM Power 

    Following the release of Red Hat OpenShift 4.14, clients can run x86 and IBM Power Worker Nodes in the same OpenShift Container Platform cluster with Multi-Architecture Compute. A study compared the performance implications of deploying applications on a Multi-Arch Compute OpenShift Container Platform (OCP) cluster with a cluster exclusively built on IBM Power architecture. Findings revealed no significant performance impact with or without Multi-Arch Compute. Click here to learn more about the study and the results.

    Watch the Red Hat OpenShift Multi-Arch Introduction Video to learn how, why, and when to add Power to your x86 OpenShift cluster.   

    Watch the OpenShift Multi-Arch Sock Shop Demonstration Video deploying the open-source Sock Shop e-commerce solution using a mix of x86 and Power Worker Nodes with Red Hat OpenShift Multi-Arch to further your understanding. 

  • Awesome Notes – 11/28

    Here are some great resources for OpenShift Container Platform on Power:

    UKI Brunch & Learn – Red Hat OpenShift – Multi-Architecture Compute

    Glad to see Multi-Architecture Compute with an Intel control plane and a Power worker in all its glory. Thanks to Paul Chapman.

    https://www.linkedin.com/posts/chapmanp_uki-brunch-learn-red-hat-openshift-activity-7133370146890375168-AmuL?utm_source=share&utm_medium=member_desktop

    Explore Multi Arch Compute in OpenShift cluster with IBM Power systems

    In the ever-evolving landscape of computing, the quest for optimal performance and adaptability remains constant. This study delves into the performance implications of deploying applications on a Multi Arch Compute OpenShift Container Platform (OCP) cluster, comparing it with a cluster exclusively built on IBM Power architecture. Our findings reveal that, with or without Multi Arch Compute, there is no significant impact on performance.

    Thanks to @Mel from the IBM Power Systems Performance Team

    https://community.ibm.com/community/user/powerdeveloper/blogs/mel-bakhshi/2023/11/28/explore-mac-ocp-on-power

    Enabling FIPS Compliance in Openshift Cluster Platform on Power

    A new PDEX blog is posted to help the technical experts configure their OpenShift Container Platform on Power and the necessary background to configure FIPS 140-2 compliance.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/enabling-fips-compliance-in-openshift-cluster-plat?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting etcd data on OpenShift Container Platform on Power

    This article was originally posted to Medium by Gaurav Bankar and is now updated with details for 4.14.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-etcd-data-on-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Using TLS Security Profiles on OpenShift Container Platform on IBM Power

    This article identifies the cluster operators and components that use TLS security profiles, covers the available profiles, and shows how to configure each profile and verify it is properly enabled.

    https://community.ibm.com/community/user/powerdeveloper/communities/community-home/recent-community-blogs?communitykey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting disks on OpenShift Container Platform on Power Systems

    This document outlines the concepts, how to set up an external Tang cluster on IBM PowerVS, how to set up an OpenShift cluster on IBM PowerVS, and how to confirm the encrypted disk setup.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-disks-on-openshift-container-platform-o?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power

    This article outlines how to verify the profiles, check for the scan results, and configure a compliant cluster.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/configuring-a-pci-dss-compliant-openshift-containe?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Open Source Container images for Power now available in IBM Container Registry

    The open source team has posted new images:

    Image                           Version   Pull command                                                                  Date
    grafana-mimir-build-image       2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-build-image-ppc64le:2.9.0       Nov 24, 2023
    grafana-mimir-continuous-test   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-continuous-test-ppc64le:2.9.0   Nov 24, 2023
    grafana-mimir                   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-ppc64le:2.9.0                   Nov 24, 2023
    grafana-mimir-rules-action      2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-rules-action-ppc64le:2.9.0      Nov 24, 2023
    grafana-mimirtool               2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimirtool-ppc64le:2.9.0               Nov 24, 2023
    grafana-query-tee               2.9.0     docker pull icr.io/ppc64le-oss/grafana-query-tee-ppc64le:2.9.0               Nov 24, 2023
    filebrowser                     v2.24.2   docker pull icr.io/ppc64le-oss/filebrowser-ppc64le:v2.24.2                   Nov 24, 2023
    neo4j                           5.9.0     docker pull icr.io/ppc64le-oss/neo4j-ppc64le:5.9.0                           Nov 24, 2023
    kong                            3.3.0     docker pull icr.io/ppc64le-oss/kong-ppc64le:3.3.0                            Nov 24, 2023
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Multi-arch build pipelines for Power: Automating multi-arch image builds

    Multi-arch build pipelines can greatly reduce the complexity of supporting multiple operating systems and architectures. Notably, images built on the Power architecture can seamlessly be supported by other architectures, and vice versa, amplifying the versatility and impact of your applications. Furthermore, automating the process using various CI tools not only accelerates the creation of multi-arch images but also ensures consistency, reliability, and ease of integration into diverse software environments.

    Building on our exploration of multi-arch pipelines for IBM Power in the first blog, this blog delves into the next frontier: automation. Automating multi-arch image builds using Continuous Integration (CI) tools has become essential in modern software development. This process allows developers to efficiently create and maintain container images that can run on various CPU architectures, such as IBM Power (ppc64le), x86 (amd64), or ARM, ensuring compatibility across diverse hardware environments.

    Part 1 https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/multi-arch-pipelines-for-ibm-power Part 2 https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/automating-multi-arch-image-builds-for-power

  • Quay.io now available on IBM Power Systems

    Thanks to the Red Hat and Power teams, and Yussuf in particular – IBM Power now supports installing and running Red Hat Quay.

    Red Hat Quay is a distributed, highly available, security-focused, and scalable private image registry platform that enables you to build, organize, distribute, and deploy containers for your enterprise. It provides a single and resilient content repository for delivering containerized software to development and production across Red Hat OpenShift and Kubernetes clusters.

    Now, Red Hat Quay is available on IBM Power with version 3.10. Read the official Red Hat Quay 3.10 blog and for more information visit the Red Hat Quay Documentation page.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/11/07/quay-on-power

  • Useful Notes for September and October 2023

    Hi everyone, I’ve been heads down working on Multiarchitecture Compute and the Power platform for IBM.

    How to add /etc/hosts file entries in OpenShift containers

    You can add host aliases to the Pod definition, which is handy if the code hard-codes a hostname; the snippet below shows the spec fragment, and a complete Pod sketch follows the reference link.

          hostAliases:
          - ip: "127.0.0.1"
            hostnames:
            - "home"
          - ip: "10.1.x.x"
            hostnames:
            - "remote-host"

    https://access.redhat.com/solutions/3696301
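
    For context, a minimal sketch of a complete Pod using hostAliases (the name, image, and IP are illustrative); the aliases end up in the container’s /etc/hosts:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      hostAliases:
        - ip: "10.1.2.3"
          hostnames:
          - "remote-host"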

    Infrastructure Nodes in OpenShift 4

    A link to Infra nodes which provide a specific role in the cluster.

    https://access.redhat.com/solutions/5034771

    Multiarchitecture Compute Research

    Calling all IBM Power customers looking to impact Power modernization capabilities. The IBM Power Design Team is facilitating a study to understand customer sentiment toward Multi-Architecture Computing (MAC) and needs your help.

    https://community.ibm.com/community/user/powerdeveloper/blogs/erica-albert/2023/10/11/multi-architecture-computing-research-recruit 

    This is an interesting opportunity to work with customers on IBM Power and OpenShift as they mix the architecture workloads to meet their needs.

  • Weekly Notes

    Here are my weekly notes:

    Flow Connector

    If you are using the VPC, you can track connections to and from your VPC subnets using flow log collectors (Flow Logs for VPC).

    ❯ find . -name "*.gz" -exec gunzip {} \;

    ❯ grep -Rh 192.168.200.10 | jq -r '.flow_logs[] | select(.action == "rejected") | "\(.initiator_ip),\(.target_ip),\(.target_port)"' | sort -u | grep 192.168.200.10
    10.245.0.5,192.168.200.10,36416,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,36430,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,58894,2023-08-08T14:31:32Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:41Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.1.5,192.168.200.10,9100,2023-08-08T14:31:32Z
    10.245.129.4,192.168.200.10,43524,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9100,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9537,2023-08-08T14:50:36Z

    Image Pruner Reports Error….

    You can check the status of the image-registry cluster operator.

    ❯ oc get co image-registry
    image-registry                             4.14.0-ec.4   True        False         True       3d14h   ImagePrunerDegraded: Job has reached the specified backoff limit
    

    The cronjob probably failed, so we can check that it exists.

    ❯ oc get cronjob -n openshift-image-registry
    NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    image-pruner   0 0 * * *   False     0        16h             3d15h
    

    We can run a one-off job to clear the degraded status above.

    ❯ oc create job --from=cronjob/image-pruner one-off-image-pruner -n openshift-image-registry
    job.batch/one-off-image-pruner created
    

    Then your image-registry should be a-ok.

    Ref: https://gist.github.com/ryderdamen/73ff9f93cd61d5dd45a0c50032e3ae03

  • Protected: Webinar: Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers


  • Krew plugin on ppc64le

    Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Hey everyone,

    Krew, the kubectl plugin package manager, is now available on Power. The v0.4.4 release includes a ppc64le download, so you can download it and start taking advantage of the Krew plugin list. It also works with OpenShift.

    The Krew website has a list of plugins (https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross-compiled, such as view-utilization.

    To take advantage of Krew with OpenShift, here are a few steps:

    1. Download the krew-linux plugin
    # curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
    
    2. Extract the krew plugin
    tar xvf krew-linux_ppc64le.tar.gz 
    ./LICENSE
    ./krew-linux_ppc64le
    
    3. Move it to /usr/bin so it’s picked up by oc.
    mv krew-linux_ppc64le /usr/bin/kubectl-krew
    
    4. Update the krew plugin
    # kubectl krew update
    WARNING: To be able to run kubectl plugins, you need to add
    the following to your ~/.bash_profile or ~/.bashrc:
    
        export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
    
    and restart your shell.
    
    Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
    Updated the local copy of plugin index.
    
    5. Update your shell:
    # echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
    
    6. Restart your session (exit and come back to the shell so the variables are loaded)
    7. Try oc krew list
    # oc krew list
    PLUGIN  VERSION
    
    8. List all the plugins that support ppc64le.
    # oc krew search  | grep -v 'unavailable on linux/ppc64le'
    NAME                            DESCRIPTION                                         INSTALLED
    allctx                          Run commands on contexts in your kubeconfig         no
    assert                          Assert Kubernetes resources                         no
    bulk-action                     Do bulk actions on Kubernetes resources.            no
    ...
    tmux-exec                       An exec multiplexer using Tmux                      no
    view-utilization                Shows cluster cpu and memory utilization            no
    
    9. Install a plugin
    # oc krew install view-utilization
    Updated the local copy of plugin index.
    Installing plugin: view-utilization
    Installed plugin: view-utilization
    \
     | Use this plugin:
     |      kubectl view-utilization
     | Documentation:
     |      https://github.com/etopeter/kubectl-view-utilization
     | Caveats:
     | \
     |  | This plugin needs the following programs:
     |  | * bash
     |  | * awk (gawk,mawk,awk)
     | /
    /
    WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
       These plugins are not audited for security by the Krew maintainers.
       Run them at your own risk.
    
    10. Use the plugin.
    # oc view-utilization
    Resource     Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
    CPU              7521         16        2400        5        45000        37479        37479
    Memory    33477885952         36  3774873600        4  92931489792  59453603840  59453603840
    

    Tip: There are many more plugins that support ppc64le but do not yet have their Krew manifests updated.
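
    If a plugin you need is not yet marked as available for ppc64le in the krew-index, Krew can also install from a local plugin manifest that you maintain yourself; a rough sketch, where my-plugin.yaml is a hypothetical manifest file:

    oc krew install --manifest=my-plugin.yaml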

    Thanks to PR 755 we have support for ppc64le.

    References

    https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md

    https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz

  • Notes from the Week

    A few things I learned this week are:

    There is a cool session on IPI PowerVS for OpenShift called Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers Webinar.

    Did you know that you can run Red Hat OpenShift clusters on IBM Power servers? Maybe you do, but you don’t have Power hardware to try it out on, or you don’t have time to learn about OpenShift using the User-Provisioned method of installation. Let us introduce you to the Installer-Provisioned Installation method for OpenShift clusters, also called “an IPI install”, on IBM Power Virtual Servers. IPI installs are much simpler than UPI installs because the installer itself has built-in logic that can provision each and every component your cluster needs.

    Join us on 27 July at 10 AM ET for this 1-hour live webinar to learn why the benefits of the IPI installation method go well beyond installation and into the cluster lifecycle. We’ll show you how to deploy OpenShift IPI on Power Virtual Server with a live demo. And finally, we’ll share some ways that you can try it yourself. Please share any questions by clicking on the Reply button. If you have not done so already, register to join here and get your calendar invite.

    https://community.ibm.com/community/user/powerdeveloper/discussion/introducing-red-hat-openshift-installer-provisioned-installation-ipi-for-ibm-power-virtual-servers-webinar

    Of all things, I finally started using reverse search in the shell: CTRL+R on the command line.

    The IBM Power Systems team announced a Tech Preview of Red Hat Ansible Automation Platform on IBM Power

    Continuing our journey to enable our clients’ automation needs, IBM is excited to announce the Technical Preview of Ansible Automation Platform running on IBM Power! Now, in addition to automating against IBM Power endpoints (e.g., AIX, IBM i, etc.), clients will be able to run Ansible Automation Platform components on IBM Power. In addition to IBM Power support for Ansible Automation Platform, Red Hat is also providing support for Ansible running on IBM Z Systems. Now, let’s dive into the specifics of what this entails.

    My team released a new version of the PowerVM Tang Server Automation to fix a routing problem:

    The powervm-tang-server-automation project provides Terraform based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management.

    https://github.com/IBM/powervm-tang-server-automation/tree/v1.0.1

    The Red Hat Ansible and IBM teams have released Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. I’m excited to try it out and expand my Ansible usage.

    It’s great to see the Kernel Module Management (KMM) Operator release 1.1 with Day-1 support through KMM.

    The Kernel Module Management Operator manages out-of-tree kernel modules in Kubernetes.

    https://github.com/kubernetes-sigs/kernel-module-management