Category: OpenShift

  • Weekly Notes

    Here are my weekly notes:

    Flow Connector

    If you are using a VPC, you can track connections between your subnets and your VPC using Flow Connector.

    ❯ find . -name "*.gz" -exec gunzip {} \;

    ❯ grep -Rh 192.168.200.10 | jq -r '.flow_logs[] | select(.action == "rejected") | "\(.initiator_ip),\(.target_ip),\(.target_port)"' | sort -u | grep 192.168.200.10

    10.245.0.5,192.168.200.10,36416,2023-08-08T14:31:32Z

    10.245.0.5,192.168.200.10,36430,2023-08-08T14:31:32Z

    10.245.0.5,192.168.200.10,58894,2023-08-08T14:31:32Z

    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:41Z

    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:42Z

    10.245.1.5,192.168.200.10,9100,2023-08-08T14:31:32Z

    10.245.129.4,192.168.200.10,43524,2023-08-08T14:31:32Z

    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:32Z

    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:42Z

    10.245.64.4,192.168.200.10,9100,2023-08-08T14:31:42Z

    10.245.64.4,192.168.200.10,9537,2023-08-08T14:50:36Z
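
    The output above carries a fourth column with the flow timestamp. A variant of the jq filter that also emits it, assuming the flow-log records carry a start_time field (check a sample record for the exact field name), would be:

    ❯ grep -Rh 192.168.200.10 | jq -r '.flow_logs[] | select(.action == "rejected") | "\(.initiator_ip),\(.target_ip),\(.target_port),\(.start_time)"' | sort -u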

    Image Pruner Reports an Error

    You can check the image-registry status via the cluster operator.

    ❯ oc get co image-registry
    image-registry                             4.14.0-ec.4   True        False         True       3d14h   ImagePrunerDegraded: Job has reached the specified backoff limit
    

    The cronjob probably failed, so we can check that it exists.

    ❯ oc get cronjob -n openshift-image-registry
    NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    image-pruner   0 0 * * *   False     0        16h             3d15h
    

    We can run a one-off job to clear the degraded status above.

    ❯ oc create job --from=cronjob/image-pruner one-off-image-pruner -n openshift-image-registry
    job.batch/one-off-image-pruner created
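
    To confirm the one-off job finished before re-checking the cluster operator, you can watch the job; the output below is illustrative:

    ❯ oc get job one-off-image-pruner -n openshift-image-registry
    NAME                   COMPLETIONS   DURATION   AGE
    one-off-image-pruner   1/1           41s        2m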
    

    Then your image-registry should be a-ok.

    Ref: https://gist.github.com/ryderdamen/73ff9f93cd61d5dd45a0c50032e3ae03

  • Protected: Webinar: Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers


  • Krew plugin on ppc64le

    Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Hey everyone,

    Krew, the kubectl plugin package manager, is now available on Power. Release v0.4.4 includes a ppc64le download, so you can start taking advantage of the Krew plugin list. It also works with OpenShift.

    The Krew website has a [list of plugins](https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross-compiled, such as view-utilization.

    To take advantage of Krew with OpenShift, here are a few steps:

    1. Download the krew-linux plugin
    # curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
    
    2. Extract the krew plugin
    tar xvf krew-linux_ppc64le.tar.gz 
    ./LICENSE
    ./krew-linux_ppc64le
    
    3. Move it to /usr/bin so it’s picked up by oc.
    mv krew-linux_ppc64le /usr/bin/kubectl-krew
    
    4. Update the krew plugin index
    # kubectl krew update
    WARNING: To be able to run kubectl plugins, you need to add
    the following to your ~/.bash_profile or ~/.bashrc:
    
        export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
    
    and restart your shell.
    
    Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
    Updated the local copy of plugin index.
    
    5. Update your shell:
    # echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
    
    6. Restart your session (exit and come back to the shell so the variables are loaded)
    7. Try oc krew list
    # oc krew list
    PLUGIN  VERSION
    
    8. List all the plugins that support ppc64le.
    # oc krew search  | grep -v 'unavailable on linux/ppc64le'
    NAME                            DESCRIPTION                                         INSTALLED
    allctx                          Run commands on contexts in your kubeconfig         no
    assert                          Assert Kubernetes resources                         no
    bulk-action                     Do bulk actions on Kubernetes resources.            no
    ...
    tmux-exec                       An exec multiplexer using Tmux                      no
    view-utilization                Shows cluster cpu and memory utilization            no
    
    9. Install a plugin
    # oc krew install view-utilization
    Updated the local copy of plugin index.
    Installing plugin: view-utilization
    Installed plugin: view-utilization
    \
     | Use this plugin:
     |      kubectl view-utilization
     | Documentation:
     |      https://github.com/etopeter/kubectl-view-utilization
     | Caveats:
     | \
     |  | This plugin needs the following programs:
     |  | * bash
     |  | * awk (gawk,mawk,awk)
     | /
    /
    WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
       These plugins are not audited for security by the Krew maintainers.
       Run them at your own risk.
    
    10. Use the plugin.
    # oc view-utilization
    Resource     Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
    CPU              7521         16        2400        5        45000        37479        37479
    Memory    33477885952         36  3774873600        4  92931489792  59453603840  59453603840
    

    Tip: There are many more plugins that support ppc64le but do not yet have their Krew manifests updated.
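
    You can check whether a specific plugin's manifest offers your platform with krew info; if the manifest doesn't list linux/ppc64le, krew reports that the plugin does not offer installation for the platform:

    # kubectl krew info view-utilization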

    Thanks to PR 755 we have support for ppc64le.

    References

    https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md

    https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz

  • Notes from the Week

    A few things I learned this week are:

    There is a cool session on IPI PowerVS for OpenShift called Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers Webinar.

    Did you know that you can run Red Hat OpenShift clusters on IBM Power servers? Maybe you do, but you don’t have Power hardware to try it out on, or you don’t have time to learn about OpenShift using the User-Provisioned method of installation. Let us introduce you to the Installer-Provisioned Installation method for OpenShift clusters, also called “an IPI install” on IBM Power Virtual Servers. IPI installs are much simpler than UPI installs, because the installer itself has built-in logic that can provision each and every component your cluster needs.

    Join us on 27 July at 10 AM ET for this 1-hour live webinar to learn how the benefits of the IPI installation method go well beyond installation and into the cluster lifecycle. We’ll show you how to deploy OpenShift IPI on Power Virtual Server with a live demo. And finally, we’ll share some ways that you can try it yourself. Please share any questions by clicking on the Reply button. If you have not done so already, register to join here and get your calendar invite.

    https://community.ibm.com/community/user/powerdeveloper/discussion/introducing-red-hat-openshift-installer-provisioned-installation-ipi-for-ibm-power-virtual-servers-webinar

    Of all things, I finally started using reverse-search in the shell: CTRL+R on the command line.

    The IBM Power Systems team announced a Tech Preview of Red Hat Ansible Automation Platform on IBM Power

    Continuing our journey to enable our clients’ automation needs, IBM is excited to announce the Technical Preview of Ansible Automation Platform running on IBM Power! Now, in addition to automating against IBM Power endpoints (e.g., AIX, IBM i, etc.), clients will be able to run Ansible Automation Platform components on IBM Power. In addition to IBM Power support for Ansible Automation Platform, Red Hat is also providing support for Ansible running on IBM Z Systems. Now, let’s dive into the specifics of what this entails.

    My team released a new version of the PowerVM Tang Server Automation to fix a routing problem:

    The powervm-tang-server-automation project provides Terraform based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management.

    https://github.com/IBM/powervm-tang-server-automation/tree/v1.0.1

    The RH Ansible/IBM teams have released Red Hat Ansible Lightspeed with IBM Watson Code Assistant. I’m excited to try it out and expand my Ansible usage.

    It’s great to see the Kernel Module Management (KMM) Operator release 1.1 with Day-1 support.

    The Kernel Module Management Operator manages out of tree kernel modules in Kubernetes.

    https://github.com/kubernetes-sigs/kernel-module-management

  • A few more notes from the week

    A few things I learned about this week are:

    IBM Redbooks: Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power

    A new document provides hints and tips about how to install your Red Hat OpenShift cluster, and also provides guidance on how to size and tune your environment. I’m reading through it now, and I’m excited.

    Upcoming Webinar: Powering AI Innovation: Exploring IBM Power with MMA and ONNX on Power10 Featuring Real Time Use Cases

    The session is going to showcase the impressive capabilities of MMA (Matrix Math Accelerator) on the cutting-edge Power10 architecture.

    CSI Cinder Configuration for a different availability zone

    I had a failed install on OpenStack with Power9 KVM, and I had to redirect the image registry to use a different storage class. Use the following storage class; you’ll have to change the default annotation and the names for your environment.

    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      name: standard-csi-new
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      availability: nova
    

    If you need to change the default-class, then:

    oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
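
    And the mirror-image patch makes the new class the default, if you created it without the annotation:

    oc patch storageclass standard-csi-new -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'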
    

    TIP: openshift-install router quota check

    FATAL failed to fetch Cluster: failed to fetch dependency of "Cluster": failed to generate asset "Platform Quota Check": error(MissingQuota): Router is not available because the required number of resources (1) is more than remaining quota of 0

    Then check the quota for the number of routers. You probably need to remove some old ones.

    # openstack --os-cloud openstack quota show | grep router
    | routers | 15 |
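
    To find candidates for removal (names and IDs vary per environment):

    # openstack --os-cloud openstack router list -c ID -c Name
    # openstack --os-cloud openstack router delete <router-id>
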
  • Weekly Notes

    Here are my weekly learnings and notes:

    Podman Desktop updates v1.0.1

    Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment.

    In a cool update, the Podman Desktop team added support for OpenShift Local in v1.0.1; Kind clusters were already supported. We can do some advanced stuff. You may have to download extensions and upgrade Podman to v4.5.0.

    ❯ brew upgrade podman-desktop
    ...
    🍺  podman-desktop was successfully upgraded!
    

    Skupper… interesting

    Skupper is a layer 7 service interconnect. It enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.

    There is also a sample to try.
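
    As a minimal sketch (assuming the skupper CLI is installed and you have a kubeconfig context for each cluster), linking two namespaces looks like:

    ❯ skupper init                          # in the namespace on cluster A
    ❯ skupper token create ~/secret.token   # generate a connection token on cluster A
    ❯ skupper init                          # in the namespace on cluster B
    ❯ skupper link create ~/secret.token    # link cluster B back to cluster A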

    Red Hat OpenShift Container Platform 4.13.0 is generally available

    I’ve been working on the product for 4.13.0, specifically oc new-app and new-build support.

    Podman Cheat Sheet

    Podman Cheat Sheet covers all the basic commands for managing images, containers, and container resources. Super helpful for those stuck finding the right command to build, manage, or run your container.

    File Integrity Operator: Using File Integrity Operator to support file integrity checks on OpenShift Container Platform on Power

    My colleague has published a blog on File Integrity Operator.

    As part of this series, I have written a blog on PCI-DSS and the Compliance Operator to have a secure and compliant cluster. Part of the cluster’s security and compliance depends on the File Integrity Operator, an operator that uses intrusion detection rules to verify the integrity of files and directories on the cluster’s nodes.

    https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/05/24/using-file-integrity-operator-to-support-file-inte
  • Weekly Notes

    Here are my notes from the week:

    1. Subnet to CIDR block Cheat Sheet
    2. OpenShift Installer Provisioned Infrastructure for IBM Cloud VPC

    rfc1878: Subnet CIDR Cheat Sheet

    I found a great cheat sheet for CIDR subnet masks.

       Mask value:                             # of
       Hex            CIDR   Decimal           addresses  Classfull
       80.00.00.00    /1     128.0.0.0         2048 M     128 A
       C0.00.00.00    /2     192.0.0.0         1024 M      64 A
       E0.00.00.00    /3     224.0.0.0          512 M      32 A
       F0.00.00.00    /4     240.0.0.0          256 M      16 A
       F8.00.00.00    /5     248.0.0.0          128 M       8 A
       FC.00.00.00    /6     252.0.0.0           64 M       4 A
       FE.00.00.00    /7     254.0.0.0           32 M       2 A
       FF.00.00.00    /8     255.0.0.0           16 M       1 A
       FF.80.00.00    /9     255.128.0.0          8 M     128 B
       FF.C0.00.00   /10     255.192.0.0          4 M      64 B
       FF.E0.00.00   /11     255.224.0.0          2 M      32 B
       FF.F0.00.00   /12     255.240.0.0       1024 K      16 B
       FF.F8.00.00   /13     255.248.0.0        512 K       8 B
       FF.FC.00.00   /14     255.252.0.0        256 K       4 B
       FF.FE.00.00   /15     255.254.0.0        128 K       2 B
       FF.FF.00.00   /16     255.255.0.0         64 K       1 B
       FF.FF.80.00   /17     255.255.128.0       32 K     128 C
       FF.FF.C0.00   /18     255.255.192.0       16 K      64 C
       FF.FF.E0.00   /19     255.255.224.0        8 K      32 C
       FF.FF.F0.00   /20     255.255.240.0        4 K      16 C
       FF.FF.F8.00   /21     255.255.248.0        2 K       8 C
       FF.FF.FC.00   /22     255.255.252.0        1 K       4 C
       FF.FF.FE.00   /23     255.255.254.0      512         2 C
       FF.FF.FF.00   /24     255.255.255.0      256         1 C
       FF.FF.FF.80   /25     255.255.255.128    128       1/2 C
       FF.FF.FF.C0   /26     255.255.255.192     64       1/4 C
       FF.FF.FF.E0   /27     255.255.255.224     32       1/8 C
       FF.FF.FF.F0   /28     255.255.255.240     16      1/16 C
       FF.FF.FF.F8   /29     255.255.255.248      8      1/32 C
       FF.FF.FF.FC   /30     255.255.255.252      4      1/64 C
       FF.FF.FF.FE   /31     255.255.255.254      2     1/128 C
       FF.FF.FF.FF   /32     255.255.255.255      1
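
    The address-count column falls out of the prefix length: a /N mask leaves 32 - N host bits, so 2^(32 - N) addresses. A quick shell check:

    ❯ echo $((2 ** (32 - 24)))   # /24
    256
    ❯ echo $((2 ** (32 - 12)))   # /12, i.e., 1024 K
    1048576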

    Thanks to the sites that pointed me to the RFC, and to RFC 1878 itself.

    Mutating WebHook to add Node Selectors

    Thanks to these sites

    1. hmcts/k8s-env-injector provided inspiration for this approach; the code patterns are updated for the latest Kubernetes versions.
    2. phenixblue/imageswap-webhook provided the python based pattern for this approach.
    3. Kubernetes: MutatingAdmissionWebhook

    I added some code to add annotations and nodeSelectors: https://github.com/prb112/openshift-demo/tree/main/mutating
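
    Under the covers, a mutating webhook returns an AdmissionReview response whose patch field is a base64-encoded JSONPatch. A minimal sketch of such a patch (the label and annotation keys are just examples, and the add of a key under /metadata/annotations assumes the annotations map already exists):

    [
      {"op": "add", "path": "/spec/nodeSelector", "value": {"kubernetes.io/arch": "ppc64le"}},
      {"op": "add", "path": "/metadata/annotations/example.com~1injected", "value": "true"}
    ]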

    Installing OpenShift installer-provisioned infrastructure on IBM Cloud VPC

    This document outlines installing OpenShift on IBM Cloud with IPI using the openshift-installer.

    As of OpenShift 4.13, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud VPC. The installation program provisions the required infrastructure, which you can then further customize.

    This document describes the creation of an OCP cluster using IPI (Installer-Provisioned Infrastructure) on an existing IBM Cloud VPC.

    This setup is used with the day-2 operations on PowerVS to make a multiarch compute cluster.

    1. Create IBM API Key
    2. Create the IAM Services
    3. Pick your build
    4. Deploy

    1. Create IBM API Key

    1. Navigate to API keys iam – api keys
    2. Click Create
    3. Enter name rdr-demo
    4. Click Create
    5. Copy your API key, it’ll be used later on.

    2. Create the IAM Services

    1. Navigate to Service IDs iam – serviceids
    2. Click Create service ID with the name rdr-demo to identify your team.
    3. Assign access:
    Internet Services (All): Viewer, Operator, Editor, Reader, Writer, Manager, Administrator
    Cloud Object Storage (All): Viewer, Operator, Editor, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Administrator
    IAM Identity Service (All): Viewer, Operator, Editor, Administrator, ccoctlPolicy, policycreate
    Resource group only (ocp-dev-resource-group resource group): Viewer, Administrator, Editor, Operator
    VPC Infrastructure Services (All): Viewer, Operator, Editor, Reader, Writer, Administrator, Manager
    

    3. Pick your build

    I used 4.13.0-rc.7.

    4. Deploy

    1. Connect to your jumpserver or bastion where you are doing the deployment.

    Tip: it’s worth having tmux installed for this install (it’ll take about 1h30m)

    2. Export the API key you created above
    ❯ export IC_API_KEY=<REDACTED>
    
    3. Create a working folder
    ❯ mkdir -p ipi-vpc-414-rc7
    ❯ cd ipi-vpc-414-rc7
    
    4. Download the installers and extract them to the binary folder.
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/ccoctl-linux.tar.gz
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-client-linux.tar.gz
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-install-linux.tar.gz
    ❯ tar xvf ccoctl-linux.tar.gz --dir /usr/local/bin/
    ❯ tar xvf openshift-client-linux.tar.gz --dir /usr/local/bin/
    ❯ tar xvf openshift-install-linux.tar.gz --dir /usr/local/bin/
    
    5. Verify the openshift-install version is correct.
    ❯ openshift-install version
    openshift-install 4.13.0-rc.7
    built from commit 3e0b2a2ec26d9ffcca34b361896418499ad9d603
    release image quay.io/openshift-release-dev/ocp-release@sha256:aae5131ec824c301c11d0bf11d81b3996a222be8b49ce4716e9d464229a2f92b
    release architecture amd64
    
    6. Copy over your pull-secret.

    a. Log in with your Red Hat ID

    b. Navigate to https://console.redhat.com/openshift/install/ibm-cloud 

    c. Scroll down the page and copy the pull-secret.

    This pull-secret should work for you; save it for later as pull-secret.txt in the working directory.

    7. Extract the CredentialsRequest objects and create the credentials.
    RELEASE_IMAGE=$(openshift-install version | awk '/release image/ {print $3}')
    oc adm release extract --cloud=ibmcloud --credentials-requests $RELEASE_IMAGE --to=rdr-demo
    ccoctl ibmcloud create-service-id --credentials-requests-dir rdr-demo --output-dir rdr-demo-out --name rdr-demo --resource-group-name ocp-dev-resource-group
    
    8. Create the install-config
    ❯ openshift-install create install-config --dir rc7_2
    ? SSH Public Key /root/.ssh/id_rsa.pub                                                                     
    ? Platform ibmcloud                                                                                        
    ? Region jp-osa                                                                                            
    ? Base Domain ocp-multiarch.xyz (rdr-multi-is)                                                             
    ? Cluster Name rdr-multi-pb                                                                                
    ? Pull Secret [? for help] ********************************************************************************
    ***********************************
    INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
    
    9. Edit the install-config.yaml to add resourceGroupName
    platform:
      ibmcloud:
        region: jp-osa
        resourceGroupName: my-resource-group 
    
    10. Copy the generated ccoctl manifests over.
    ❯ cp rdr-demo-out/manifests/* rc7_1/manifests/
    
    11. Create the manifests.
    ❯ openshift-install create manifests --dir=rc7_1
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
    
    12. Create the cluster.
    ❯ openshift-install create cluster --dir=rc7_3
    INFO Consuming Worker Machines from target directory
    INFO Consuming Common Manifests from target directory
    INFO Consuming Openshift Manifests from target directory
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Consuming Master Machines from target directory
    INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.13-9.2/builds/413.92.202305021736-0/x86_64/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2.gz?sha256=222abce547c1bbf32723676f4977a3721c8a3788f0b7b6b3496b79999e8c60b3'
    INFO The file was found in cache: /root/.cache/openshift-installer/image_cache/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2. Reusing...
    INFO Creating infrastructure resources...
    INFO Waiting up to 20m0s (until 12:09PM) for the Kubernetes API at https://api.xyz.ocp-multiarch.xyz:6443... 
    INFO API v1.26.3+b404935 up                       
    INFO Waiting up to 30m0s (until 12:19PM) for bootstrapping to complete... 
    INFO Destroying the bootstrap resources...        
    INFO Waiting up to 40m0s (until 12:41PM) for the cluster at https://api.xyz.ocp-multiarch.xyz:6443 to initialize... 
    INFO Checking to see if there is a route at openshift-console/console... 
    INFO Install complete!                            
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ipi-vpc-414-rc7/rc7_3/auth/kubeconfig' 
    INFO Access the OpenShift web-console here: 
    INFO Login to the console with user: "kubeadmin", and password: "xxxxxxxxx-wwwwww-xxxx-aas" 
    INFO Time elapsed: 1h28m9s      
    
    13. Verify the cluster

    a. Set the KUBECONFIG provided by the installation

    export KUBECONFIG=$(pwd)/rc7_1/auth/kubeconfig
    

    b. Check the nodes are Ready

    ❯  oc get nodes
    NAME                                    STATUS   ROLES                  AGE     VERSION
    rdr-multi-ca-rc6-tplwd-master-0         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-master-1         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-master-2         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-pfqjx   Ready    worker                 4h47m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-th8j4   Ready    worker                 4h47m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-xl75m   Ready    worker                 4h53m   v1.26.3+b404935
    

    c. Check Cluster Operators

    ❯ oc get co
    NAME                                       VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    authentication                             4.13.0-rc.6   True        False         False      4h43m
    baremetal                                  4.13.0-rc.6   True        False         False      5h5m
    cloud-controller-manager                   4.13.0-rc.6   True        False         False      5h13m
    cloud-credential                           4.13.0-rc.6   True        False         False      5h18m
    cluster-autoscaler                         4.13.0-rc.6   True        False         False      5h5m
    config-operator                            4.13.0-rc.6   True        False         False      5h7m
    console                                    4.13.0-rc.6   True        False         False      4h47m
    control-plane-machine-set                  4.13.0-rc.6   True        False         False      5h5m
    csi-snapshot-controller                    4.13.0-rc.6   True        False         False      4h54m
    dns                                        4.13.0-rc.6   True        False         False      4h54m
    etcd                                       4.13.0-rc.6   True        False         False      4h57m
    image-registry                             4.13.0-rc.6   True        False         False      4h50m
    ingress                                    4.13.0-rc.6   True        False         False      4h51m
    insights                                   4.13.0-rc.6   True        False         False      5h
    kube-apiserver                             4.13.0-rc.6   True        False         False      4h53m
    kube-controller-manager                    4.13.0-rc.6   True        False         False      4h53m
    kube-scheduler                             4.13.0-rc.6   True        False         False      4h52m
    kube-storage-version-migrator              4.13.0-rc.6   True        False         False      4h54m
    machine-api                                4.13.0-rc.6   True        False         False      4h48m
    machine-approver                           4.13.0-rc.6   True        False         False      5h5m
    machine-config                             4.13.0-rc.6   True        False         False      5h6m
    marketplace                                4.13.0-rc.6   True        False         False      5h5m
    monitoring                                 4.13.0-rc.6   True        False         False      4h45m
    network                                    4.13.0-rc.6   True        False         False      5h8m
    node-tuning                                4.13.0-rc.6   True        False         False      4h54m
    openshift-apiserver                        4.13.0-rc.6   True        False         False      4h47m
    openshift-controller-manager               4.13.0-rc.6   True        False         False      4h54m
    openshift-samples                          4.13.0-rc.6   True        False         False      4h50m
    operator-lifecycle-manager                 4.13.0-rc.6   True        False         False      5h6m
    operator-lifecycle-manager-catalog         4.13.0-rc.6   True        False         False      5h6m
    operator-lifecycle-manager-packageserver   4.13.0-rc.6   True        False         False      4h51m
    service-ca                                 4.13.0-rc.6   True        False         False      5h7m
    storage                                    4.13.0-rc.6   True        False         False      4h51m
    

    Note: Confirm that all master/worker nodes and operators are healthy, i.e., Available=True and Degraded=False.
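
    A quick way to confirm the cluster settled at the expected version (output abbreviated):

    ❯ oc get clusterversion
    NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.13.0-rc.6   True        False         4h40m   Cluster version is 4.13.0-rc.6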

    14. Verify the browser login

    a. Open a browser and log in to the console URL using the generated credentials, e.g.,

    URL: https://console-openshift-console.apps.xxxxxx.ocp-multiarch.xyz
    Username: kubeadmin
    Password: <Generated Password>
    
    15. Destroy the cluster. Run the command below, specifying the installation directory.
    ❯ ./openshift-install destroy cluster --dir  ocp413-rc6 --log-level=debug
    

    This should destroy all resources created for the cluster. If you have provisioned other resources in the generated subnet, the destroy command will fail.

    Notes

    1. You can use a pre-provisioned VPC; see https://docs.openshift.com/container-platform/4.12/installing/installing_ibm_cloud_public/installing-ibm-cloud-vpc.html#installing-ibm-cloud-vpc
    2. Cloud credential requests: an admin will have to create these for you, and as such, you’ll need to copy them over to the right locations in manifests/
    3. Use --log-level=debug with the installer to inspect the run.

    References

    1. Installing on IBM Cloud VPC
    2. Create Service ID
    3. Exporting the IBM Cloud VPC API key
  • Weekly Notes

    There are so many interesting things to share:

    1. google/go-containerregistry has some super helpful tools; in fact, I raised a PR to make sure they build ppc64le binaries (#1680)

    crane is a tool for interacting with remote images and registries.

    You can extract a binary, my-util, for a given architecture using:

    crane export ppc64le/image-id:tag image.tar
    tar xvf image.tar bin/my-util
    

    You can extract a binary from a manifest-listed image using:

    crane export --platform ppc64le image-id:tag image.tar
    tar xvf image.tar bin/my-util
    
    2. I found ko which enables multiarch builds (a complete manifest list image).
    3. Quickly checking a manifest-list image’s supported architectures:
    podman manifest inspect registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 | jq -r '.manifests[].platform.architecture'
    amd64
    arm
    arm64
    ppc64le
    s390x
    
    4. My team tagged new releases for:

    a. IBM/powervs-tang-server-automation: v1.0.4
    b. IBM/powervm-tang-server-automation: v1.0.0

  • Development Notes

    Here are some things I found interesting this week:

    Day-0 Day-1 Day-2 Definitions

    day-0: customized installation
    day-1: customization performed only once after installing a cluster
    day-2: tasks performed multiple times during the life of a cluster

    Thanks to a Red Hat colleague for this wonderful definition.

    sfdisk tips

    I used sfdisk in a PowerVM project. I found these commands helpful:

    # sfdisk --json /dev/mapper/mpatha
    {
       "partitiontable": {
          "label": "dos",
          "id": "0x14fc63d2",
          "device": "/dev/mapper/mpatha",
          "unit": "sectors",
          "partitions": [
             {"node": "/dev/mapper/mpatha1", "start": 2048, "size": 8192, "type": "41", "bootable": true},
             {"node": "/dev/mapper/mpatha2", "start": 10240, "size": 251647967, "type": "83"}
          ]
       }
    }
    
    # sfdisk --dump /dev/mapper/mpatha
    label: dos
    label-id: 0x14fc63d2
    device: /dev/mapper/mpatha
    unit: sectors
    
    /dev/mapper/mpatha1 : start=        2048, size=        8192, type=41, bootable
    /dev/mapper/mpatha2 : start=       10240, size=   251647967, type=83

    Ref: https://www.computerhope.com/unix/sfdisk.htm

    Red Hat OpenShift Container Platform IPI Config for IBM Cloud VPC

    I generated an example configuration:

    additionalTrustBundlePolicy: Proxyonly
    apiVersion: v1
    baseDomain: ocp-power.xyz
    credentialsMode: Manual
    compute:
    - architecture: amd64
      hyperthreading: Enabled
      name: worker
      platform:
        ibmcloud:
          zones:
          - jp-osa-1 
      replicas: 3
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      replicas: 3
      platform:
        ibmcloud:
          zones:
          - jp-osa-1
    metadata:
      name: rdr-test
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16
      networkType: OVNKubernetes
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      ibmcloud:
        region: jp-osa
        resourceGroupName: dev-resource-group
        vpcName: ma-compute-vpc
        controlPlaneSubnets: 
          - ma-compute-sn1 
        computeSubnets: 
          - ma-compute-sn1
    publish: External
    pullSecret: 'XYZWX'
    sshKey: ssh-ed25519 XYZWX
    fips: false
  • Weekly Tips and Notes

    The tips and notes for the week are included; I hope they help you.

    TIP: Check the System Admins on OpenShift

    A quick one to find the cluster-admins…

    ❯ oc --kubeconfig=./openstack-upi/auth/kubeconfig get clusterrolebindings -o json | jq -r '.items[] | select(.metadata.name=="cluster-admins") | .subjects[].name' | sort -u
    system:admin
    system:cluster-admins

    Ref: https://serverfault.com/questions/862728/how-to-list-users-with-role-cluster-admin-in-openshift

    Tip: Can I act as kube-admin?

    I needed to double check if I could act as kube:admin.

    ❯ oc --kubeconfig=./openstack-upi/auth/kubeconfig auth can-i create pod -A
    yes

    Ref: https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/developer-cli-commands.html#oc-auth-can-i
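
    The wildcard form asks whether you can do anything at all, which doubles as a quick cluster-admin check:

    ❯ oc --kubeconfig=./openstack-upi/auth/kubeconfig auth can-i '*' '*'
    yes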

    Blog Post: Advanced debugging techniques for OpenShift Container Platform on Power

    In this blog post, I show how to use advanced debugging techniques for OpenShift Container Platform on Power using bpftrace and lsof. The post unlocks the steps to debug complicated problems, and you can follow them to debug problems in your application or your cluster.

    It’s a solid blog on how to do advanced debugging on Red Hat OpenShift Container Platform on IBM Power.

    Ref: https://community.ibm.com/community/user/powerdeveloper/blogs/gaurav-bankar/2023/04/04/advanced-debugging-techniques-for-openshift-contai
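
    Not from the blog, but as a taste of bpftrace, here is the classic one-liner that traces which files processes open (run it from a privileged debug session on the node):

    # bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'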

    Tip: Double check the payload / architecture type

    To double-check the payload loaded in your cluster (amd64, multi, arm64, ppc64le, s390x), you can run:

    # oc get clusterversion version -o json | jq '.status.conditions[] | select(.type == "ReleaseAccepted")'
    {
      "lastTransitionTime": "2023-04-04T13:27:49Z",
      "message": "Payload loaded version=\"4.13.0-rc.2\" image=\"quay.io/openshift-release-dev/ocp-release@sha256:09178ffe61123dbb6df7b91bea11cbdb0bb1168c4150fca712b170dbe4ad13e9\" architecture=\"Multi\"",
      "reason": "PayloadLoaded",
      "status": "True",
      "type": "ReleaseAccepted"
    }
    

    Blog Post: Open Source Container images for Power now available in IBM Container Registry

    The IBM teams have added support for a variety of open source tools, and you can pull them from the ppc64le-oss namespace in ICR.

    The IBM Linux on Power team is pleased to announce that we are centralizing our public open source container images in the IBM Container Registry (ICR). This should assure end users that IBM has authentically built these containers in a secure environment. Formerly, the indicator that a container was built by IBM was that they were in Docker Hub under the ibmcom namespace. The migration to the IBM Container Registry will add clarity to their origin.

    mongodb:
    - 4.4.18: docker pull icr.io/ppc64le-oss/mongodb-ppc64le:4.4.18
    - 4.4.17: docker pull icr.io/ppc64le-oss/mongodb-ppc64le:4.4.17

    Ref: https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Tip: findmnt / sfdisk are super helpful.

    Thanks to Linode, I learned about findmnt and sfdisk

    # Declare native filesystem and reload with partprobe
    echo 'type=83' | sfdisk ${storage_device} || partprobe
    
    # Format the disk
    mkfs.xfs "${storage_device}1"
    
    # Mount the fs to our storage folder.
    mount -t xfs /dev/mapper/mpathb1 /<replace with the actual mount point>
    
    # findmnt
    TARGET                         SOURCE      FSTYPE      OPTIONS
    /                              /dev/sda3   xfs         rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota
    |-/proc                        proc        proc        rw,nosuid,nodev,noexec,relatime
    | `-/proc/sys/fs/binfmt_misc   systemd-1   autofs      rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10696
    |   `-/proc/sys/fs/binfmt_misc binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime
    |-/sys                         sysfs       sysfs       rw,nosuid,nodev,noexec,relatime,seclabel
    | |-/sys/kernel/security       securityfs  securityfs  rw,nosuid,nodev,noexec,relatime
    | |-/sys/fs/cgroup             cgroup2     cgroup2     rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot
    | |-/sys/fs/pstore             pstore      pstore      rw,nosuid,nodev,noexec,relatime,seclabel
    | |-/sys/fs/bpf                none        bpf         rw,nosuid,nodev,noexec,relatime,mode=700
    | |-/sys/fs/selinux            selinuxfs   selinuxfs   rw,nosuid,noexec,relatime
    | |-/sys/kernel/tracing        tracefs     tracefs     rw,nosuid,nodev,noexec,relatime,seclabel
    | |-/sys/kernel/debug          debugfs     debugfs     rw,nosuid,nodev,noexec,relatime,seclabel
    | |-/sys/kernel/config         configfs    configfs    rw,nosuid,nodev,noexec,relatime
    | `-/sys/fs/fuse/connections   fusectl     fusectl     rw,nosuid,nodev,noexec,relatime
    |-/dev                         devtmpfs    devtmpfs    rw,nosuid,seclabel,size=7779520k,nr_inodes=121555,mode=755,inode64
    | |-/dev/shm                   tmpfs       tmpfs       rw,nosuid,nodev,seclabel,inode64
    | |-/dev/pts                   devpts      devpts      rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
    | |-/dev/mqueue                mqueue      mqueue      rw,nosuid,nodev,noexec,relatime,seclabel
    | `-/dev/hugepages             hugetlbfs   hugetlbfs   rw,relatime,seclabel,pagesize=16M
    |-/run                         tmpfs       tmpfs       rw,nosuid,nodev,seclabel,size=3126208k,nr_inodes=819200,mode=755,inode64
    | `-/run/user/0                tmpfs       tmpfs       rw,nosuid,nodev,relatime,seclabel,size=1563072k,nr_inodes=390768,mode=700,inode64
    |-/var/lib/nfs/rpc_pipefs      rpc_pipefs  rpc_pipefs  rw,relatime
    `-/boot                        /dev/sdb2   xfs         rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota