Blog

  • Lessons and Notes for the Week

    Here are some lessons learned from the week (and shared with others):

    TIP: How to create a Power Workspace from the CLI

    This document outlines the steps necessary to create a PowerVS workspace.

    1. Login to IBM Cloud

    ibmcloud login --sso -r jp-osa -c 65b64c111111114bbfbd893c2c
    
    • -r jp-osa targets the region
    • -c 65b64c111111114bbfbd893c2c is the account being targeted.
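
    If you're unsure which account and region you ended up on, a quick sanity check after logging in (the values shown will be your own) is:

    ❯ ibmcloud target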

    2. Install the Plugins for PowerVS and VPC.

    As a good practice, install the powervs and vpc plugins.

    ❯ ibmcloud plugin install power-iaas -f
    ❯ ibmcloud plugin install 'vpc-infrastructure' -f 
    

    3. Create a Power Systems workspace

    1. The two elements you should configure below are PVS_REGION and WORKSPACE_NAME. The remaining variables can stay as they are; the command below creates a new workspace.
    WORKSPACE_NAME=rdr-mac-osa-n1
    SERVICE_NAME=power-iaas
    RESOURCE_GROUP_NAME=my-resource-group
    SERVICE_PLAN_NAME=power-virtual-server-group
    PVS_REGION=osa21
    ❯ ibmcloud resource service-instance-create \
        "${WORKSPACE_NAME}" \
        "${SERVICE_NAME}" \
        "${SERVICE_PLAN_NAME}" \
        "${PVS_REGION}" \
        -g "${RESOURCE_GROUP_NAME}"
    Creating service instance rdr-mac-osa-n1 in resource group my-resource-group of account Power Account as pb@ibm.xyz...
    OK
    Service instance rdr-mac-osa-n1 was created.
                      
    Name:             rdr-mac-osa-n1
    ID:               crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e3772ff7-48eb-4c81-9ee0-07cf0b5547a5::
    GUID:             e3772ff7-48eb-4c81-9ee0-07cf0b5547a5
    Location:         osa21
    State:            provisioning
    Type:             service_instance
    Sub Type:         
    Allow Cleanup:    false
    Locked:           false
    Created at:       2023-07-11T14:12:07Z
    Updated at:       2023-07-11T14:12:10Z
    Last Operation:             
                      Status    create in progress
                      Message   Started create instance operation

    With that, you have created a PowerVS workspace from the CLI.
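
    If you want to watch the workspace move from provisioning to active, you can re-check the service instance details using the same workspace name as above (a minimal check; the State field updates once provisioning finishes):

    ❯ ibmcloud resource service-instance "${WORKSPACE_NAME}"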

    Note: Flatcar Linux

    Flatcar Container Linux is a community Linux distribution designed for container workloads, with high security and low maintenance.

    https://www.flatcar.org/

    I learned about Flatcar Linux this week… exciting stuff.

    Demonstration: Tang Server Automation on PowerVM

    I work on the OpenShift Container Platform on Power team, which created a few videos discussing the powervm-tang-server-automation project. The project provides Terraform-based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management. Thanks to Aditi Jadhav for presenting.

    The Overview

    The Demonstration

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/12/tang-server-automation-on-powervm

    New Blog: Installing OpenShift on IBM Power Virtual Servers with IPI

    My colleague Ashwin posted a new blog on using IPI. The blog covers creating and destroying a “public” OpenShift cluster, i.e., one which is accessible through the internet and whose nodes can access the internet.

    https://community.ibm.com/community/user/powerdeveloper/blogs/ashwin-hendre/2023/07/10/powervs-ipi

  • Notes for the Week

    Here are some notes from the week:

    The Network Observability Operator is now released on IBM Power Systems.

    Network Observability (NETOBSERV) for IBM Power, little endian 1 for RHEL 9 ppc64le

    https://access.redhat.com/errata/RHSA-2023:3905?sc_cid=701600000006NHXAA2
    https://community.ibm.com/community/user/powerdeveloper/discussion/network-observability-operator-now-available-on-power
  • Notes from the Week

    A few things I learned this week are:

    There is a cool session on IPI PowerVS for OpenShift called Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers Webinar.

    Did you know that you can run Red Hat OpenShift clusters on IBM Power servers? Maybe you do, but you don’t have Power hardware to try it out on, or you don’t have time to learn about OpenShift using the User-Provisioned method of installation. Let us introduce you to the Installer-Provisioned Installation method for OpenShift clusters, also called “an IPI install” on IBM Power Virtual Servers. IPI installs are much simpler than UPI installs, because, the installer itself has built-in logic that can provision each and every component your cluster needs.

    Join us on 27 July at 10 AM ET for this 1-hour live webinar to learn why the benefits of the IPI installation method go well beyond installation and into the cluster lifecycle. We’ll show you how to deploy OpenShift IPI on Power Virtual Server with a live demo. And finally, we’ll share some ways that you can try it yourself. Please share any questions by clicking on the Reply button. If you have not done so already, register to join here and get your calendar invite.

    https://community.ibm.com/community/user/powerdeveloper/discussion/introducing-red-hat-openshift-installer-provisioned-installation-ipi-for-ibm-power-virtual-servers-webinar

    Of all things, I finally started using reverse-search in the shell: CTRL+R on the command line. link or link
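
    For example, pressing CTRL+R and typing a fragment of an earlier command brings it back (illustrative output; your history will differ):

    (reverse-i-search)`ibmcloud': ibmcloud login --sso -r jp-osa

    Press CTRL+R again to cycle through older matches, Enter to run it, or ESC to edit the line first.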

    The IBM Power Systems team announced a Tech Preview of Red Hat Ansible Automation Platform on IBM Power

    Continuing our journey to enable our clients’ automation needs, IBM is excited to announce the Technical Preview of Ansible Automation Platform running on IBM Power! Now, in addition to automating against IBM Power endpoints (e.g., AIX, IBM i, etc.), clients will be able to run Ansible Automation Platform components on IBM Power. In addition to IBM Power support for Ansible Automation Platform, Red Hat is also providing support for Ansible running on IBM Z Systems. Now, let’s dive into the specifics of what this entails.

    My team released a new version of the PowerVM Tang Server Automation to fix a routing problem:

    The powervm-tang-server-automation project provides Terraform based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management.

    https://github.com/IBM/powervm-tang-server-automation/tree/v1.0.1

    The Red Hat Ansible and IBM teams have released Red Hat Ansible Lightspeed with IBM Watson Code Assistant. I’m excited to try it out and expand my Ansible usage.

    It’s great to see the Kernel Module Management (KMM) Operator release 1.1 with Day-1 support.

    The Kernel Module Management Operator manages out of tree kernel modules in Kubernetes.

    https://github.com/kubernetes-sigs/kernel-module-management

  • Tidbits – Terraform and Multiarch

    Here is a compendium of tidbits from the week:

    FYI: Terraform v1.5.0 has been released. It’s probably worth updating. link to ppc64le build

    ❯ brew upgrade terraform

    There are new features:

    check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. Adds a new strcontains function that checks whether a given string contains a given substring. (#33069)

    https://github.com/hashicorp/terraform/releases/tag/v1.5.0
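
    If you want to try the new strcontains function without writing any configuration, you can evaluate it in terraform console (a small illustrative check):

    ❯ echo 'strcontains("ppc64le-build", "ppc64le")' | terraform console
    true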

    Blog: Build multi-architecture container images on GitHub Actions using native nodes

    My colleague, Yussuf, has posted a blog on using GitHub Actions with native nodes. It’s to the point, and super helpful. link

    There are already a few good blogs[1][2][3] available in this group that demonstrate how to build multi-arch container images on GitHub Actions using Buildx. However, they use QEMU, a free and open-source emulator, for running cross-builds. Using QEMU presents its own problems, the main one being slowness that cannot match running the builds natively. There is also no guarantee that the build will always succeed when the project uses low-level code. These pain points pushed us to use native nodes as part of the same Buildx workflow inside a GitHub Action. If you also want to use native nodes to build your projects on multiple architectures, including ppc64le, then this article is for you.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/06/14/workflow-using-buildx-remote-builders
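
    As a rough sketch of what the blog describes, the native nodes are appended to a single Buildx builder and the build fans out per platform; the host names, user, and image below are placeholders, and the full workflow is in Yussuf's post:

    ❯ docker buildx create --name native-builder --platform linux/amd64 ssh://user@amd64-host
    ❯ docker buildx create --name native-builder --append --platform linux/ppc64le ssh://user@ppc64le-host
    ❯ docker buildx build --builder native-builder --platform linux/amd64,linux/ppc64le -t <registry>/<image>:<tag> --push .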

    Tip: Custom Build of QEMU on CentOS 9

    I had to do a cross-build for ARM64, so I used QEMU. It’s not necessarily straightforward on CentOS 9; here are the steps I took:

    1. Connect to my machine.
    ❯ ssh cloud-user@<machine IP>
    2. Switch to root
    ❯ sudo -s 
    3. Enable the Code Ready Builders repo
    ❯ dnf config-manager --set-enabled crb
    4. Install the build dependencies
    ❯ dnf install -y git make pip vim ninja-build gcc glib2-devel.x86_64 pixman.x86_64 libjpeg-devel giflib-devel pixman-devel cairo-devel pango-devel qemu-kvm edk2-aarch64
    5. Install Python dependencies
    ❯ pip install Sphinx sphinx-rtd-theme
    6. Pull the QEMU code
    ❯ git clone https://git.qemu.org/git/qemu.git
    7. Change to the QEMU directory
    ❯ cd qemu
    8. Apply the patch
    From 14920d35f053c8effd17a232b5144ef43465a85e Mon Sep 17 00:00:00 2001
    From: root <root@cross-build-pbastide.novalocal>
    Date: Tue, 20 Jun 2023 10:02:39 -0400
    Subject: [PATCH] test
    
    ---
     configure | 3 ++-
     1 file changed, 2 insertions(+), 1 deletion(-)
    
    diff --git a/configure b/configure
    index 01a53576a7..02cabb556b 100755
    --- a/configure
    +++ b/configure
    @@ -371,6 +371,7 @@ else
       # be the result of a missing compiler.
       targetos=bogus
     fi
    +targetos=linux
     
     # OS specific
     
    @@ -508,7 +509,7 @@ case "$cpu" in
       sparc64)
         CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;;
     esac
    -
    +CPU_CFLAGS="-m64 -mcx16"
     check_py_version() {
         # We require python >= 3.7.
         # NB: a True python conditional creates a non-zero return code (Failure)
    -- 

    The above patch was generated with git format-patch -1 HEAD.

    9. Create the build directory
    ❯ mkdir -p build
    10. Configure the QEMU build
    ❯ ./configure --target-list=aarch64-softmmu --enable-virtfs --enable-slirp
    11. Build with all your processors.
    ❯ make -j`nproc`
    12. The file should exist and be in your native architecture.
    ❯ file ./build/qemu-system-aarch64
    ./build/qemu-system-aarch64: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=1871423864de4428c4721b8e805304f0142bed6a, for GNU/Linux 3.2.0, with debug_info, not stripped
    13. Download the CentOS qcow2 from link
    ❯ curl -O https://cloud.centos.org/centos/9-stream/aarch64/images/CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
    14. Update the .ssh/authorized_keys
    modprobe nbd
    qemu-nbd -c /dev/nbd0 CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
    mount /dev/nbd0p1 /mnt
    mkdir -p /mnt/root/.ssh/
    15. Edit the keys to add your public key.
    vim /mnt/root/.ssh/authorized_keys
    16. Unmount the qcow2 mount
    umount /mnt
    qemu-nbd -d /dev/nbd0
    17. Start the QEMU VM.
    build/qemu-system-aarch64 -m 4G -M virt -cpu cortex-a57 \
      -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
      -drive if=none,file=CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2,id=hd0 \
      -device virtio-blk-device,drive=hd0 \
      -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22 \
      -nographic

    Finally, you’ll see:

    CentOS Stream 9
    Kernel 5.14.0-115.el9.aarch64 on an aarch64
    
    Activate the web console with: systemctl enable --now cockpit.socket
    
    cross-build-pbastide login: 

    It’s ready-to-go.
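
    Since step 17 forwards host port 5555 to the guest's port 22 and your public key is in the guest's /root/.ssh/authorized_keys, you can also log in over SSH from the build host:

    ❯ ssh -p 5555 root@127.0.0.1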

    References
    1. http://cdn.kernel.org/pub/linux/kernel/people/will/docs/qemu/qemu-arm64-howto.html
    2. https://fedoraproject.org/wiki/Architectures/AArch64/Install_with_QEMU
    3. https://wiki-archive.linaro.org/LEG/UEFIforQEMU
    4. https://packages.debian.org/experimental/qemu-efi-aarch64
    5. https://www.redhat.com/sysadmin/install-epel-linux
    6. https://wiki.qemu.org/Hosts/Linux
    7. https://github.com/Automattic/node-canvas/issues/1065#issuecomment-1278496824
    8. https://wiki.debian.org/Arm64Qemu
  • A few more notes from the week

    A few things I learned about this week are:

    IBM Redbooks: Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power

    A new document provides hints and tips about how to install your Red Hat OpenShift cluster, and also provides guidance about how to size and tune your environment. I’m reading through it now – and excited.

    Upcoming Webinar: Powering AI Innovation: Exploring IBM Power with MMA and ONNX on Power10 Featuring Real Time Use Cases

    The session is going to showcase the impressive capabilities of MMA (Matrix Math Accelerator) on the cutting-edge Power10 architecture.

    CSI Cinder Configuration for a different availability zone

    I had a failed install on OpenStack with Power9 KVM, and I had to redirect the Image Registry to use a different storage class. Use the following StorageClass; you’ll have to change the default annotation and names to match your environment.

    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      name: standard-csi-new
    provisioner: cinder.csi.openstack.org
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      availability: nova
    

    If you need to change the default-class, then:

    oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
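
    You can then confirm which class is the default; oc marks it in the output:

    ❯ oc get storageclass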
    

    TIP: openshift-install router quota check

    FATAL failed to fetch Cluster: failed to fetch dependency of “Cluster”: failed to generate asset “Platform Quota Check”: error(MissingQuota): Router is not available because the required number of resources (1) is more than remaining quota of 0

    When you hit this error, check the quota for the number of routers. You probably need to remove some old ones.

    # openstack --os-cloud openstack quota show | grep router
    | routers | 15 |
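
    To free up quota, list the routers and delete any stale ones (the ID below is a placeholder; only delete routers you own):

    # openstack --os-cloud openstack router list
    # openstack --os-cloud openstack router delete <router-id>
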
  • A few weeks of notes

    I’ve been working hard on multiarch enablement for various OpenShift features. Here are a few notes from the last few weeks:

    New Blog on Cluster API

    Delve into the powerful capabilities of Cluster API and how it enables effortless K8s cluster deployment on PowerVC: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/05/31/simplifying-k8s-cluster-deployment-leveraging-capi 

    Thanks to Prajyot Parab from the Kubernetes/OpenShift on Power team.

    It’s a really helpful and interesting solution to deploying your cluster.

    TIP: OpenShift Installer Provisioned Infrastructure on IBM Cloud

    I installed a new cluster with the openshift-installer using IPI on IBM Cloud, with a pre-defined VPC and predefined networks. If your install hangs and fails mysteriously after 30-40 minutes with three provisioned RHCOS nodes, it may be because the Public Gateway for the subnet is not enabled, so the nodes cannot call back to quay.io.

    This issue was tough to debug, and I hope it helps you.

    TIP: scp hangs because of a bad MTU

    The scp command opens the channel and hangs…

    scp -vvv -i data/id_rsa sample.txt root@1.1.1.1:/tmp
    ...
    debug1: channel 0: free: client-session, nchannels 1
    debug3: channel 0: status: The following connections are open:
      #0 client-session (t4 r0 i0/0 o0/0 e[write]/0 fd 6/7/8 sock -1 cc -1)
    

    You can check the largest MTU that reaches the destination without fragmentation.

    # ping 1.1.1.1 -c 10 -M do -s 1499
    PING 1.1.1.1 (1.1.1.1) 1499(1527) bytes of data.
    ping: local error: Message too long, mtu=1500
    ping: local error: Message too long, mtu=1500
    ping: local error: Message too long, mtu=1500
    ping: local error: Message too long, mtu=1500
    

    Then, per https://unix.stackexchange.com/questions/14187/why-does-scp-hang-on-copying-files-larger-than-1405-bytes, lower the interface MTU:

    ip link set eth0 mtu 1400

    After lowering the MTU, the scp transfer completes.
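
    If NetworkManager manages the interface, you can make the lower MTU persistent across reboots as well (assuming the connection is also named eth0; adjust to your connection name):

    nmcli connection modify eth0 802-3-ethernet.mtu 1400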

    Blog: Using the oc-compliance plugin on the OpenShift Container Platform on Power

    My team has added support for oc-compliance on OpenShift Container Platform on IBM Power, and in this post, I’m sharing how to download, set up, and use the tool in the cluster.

    https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/06/07/using-the-oc-compliance-plugin-on-the-openshift-co

    The oc-compliance plugin is super helpful, and my colleague Aditi has created a new blog on oc-compliance.

    Blog: Configuring Seccomp Profile on OpenShift Container Platform for Security and Compliance on Power

    A blog on using seccomp with OCP4.

    https://medium.com/@aditijadhav38/configuring-seccomp-profile-on-openshift-container-platform-for-security-and-compliance-on-power-d94907f4b1f9

    My teammate Aditi updated it for 4.12 and 4.13 (surprisingly, no changes were needed, which is good).

  • Weekly Notes

    Here are my weekly learnings and notes:

    Podman Desktop updates v1.0.1

    Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment.

    In a cool update, the Podman Desktop team added support for OpenShift Local in v1.0.1; Kind clusters were already supported. We can do some advanced stuff. You may have to install extensions and upgrade Podman to v4.5.0.

    ❯ brew upgrade podman-desktop
    ...
    🍺  podman-desktop was successfully upgraded!
    

    Skupper… interesting

    Skupper is a layer 7 service interconnect. It enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.

    It’s a new layer-7 interconnect, and a sample is available.

    Red Hat OpenShift Container Platform 4.13.0 is generally available

    I’ve been working on the 4.13.0 release – oc new-app and new-build support.

    Podman Cheat Sheet

    The Podman Cheat Sheet covers all the basic commands for managing images, containers, and container resources. Super helpful if you’re stuck finding the right command to build, manage, or run your container.

    File Integrity Operator: Using File Integrity Operator to support file integrity checks on OpenShift Container Platform on Power

    My colleague has published a blog on File Integrity Operator.

    As part of this series, I have written a blog on PCI-DSS and the Compliance Operator to have a secure and compliant cluster. Part of the cluster’s security and compliance depends on the File Integrity Operator – an operator that uses intrusion detection rules to verify the integrity of files and directories on the cluster’s nodes.

    https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/05/24/using-file-integrity-operator-to-support-file-inte
  • Weekly Notes

    Here are my notes from the week:

    1. Subnet to CIDR block Cheat Sheet
    2. OpenShift Installer Provisioned Infrastructure for IBM Cloud VPC

    RFC 1878: Subnet CIDR Cheat Sheet

    I found a great cheat sheet for CIDR subnet masks.

       Mask value:                             # of
       Hex            CIDR   Decimal           addresses  Classfull
       80.00.00.00    /1     128.0.0.0         2048 M     128 A
       C0.00.00.00    /2     192.0.0.0         1024 M      64 A
       E0.00.00.00    /3     224.0.0.0          512 M      32 A
       F0.00.00.00    /4     240.0.0.0          256 M      16 A
       F8.00.00.00    /5     248.0.0.0          128 M       8 A
       FC.00.00.00    /6     252.0.0.0           64 M       4 A
       FE.00.00.00    /7     254.0.0.0           32 M       2 A
       FF.00.00.00    /8     255.0.0.0           16 M       1 A
       FF.80.00.00    /9     255.128.0.0          8 M     128 B
       FF.C0.00.00   /10     255.192.0.0          4 M      64 B
       FF.E0.00.00   /11     255.224.0.0          2 M      32 B
       FF.F0.00.00   /12     255.240.0.0       1024 K      16 B
       FF.F8.00.00   /13     255.248.0.0        512 K       8 B
       FF.FC.00.00   /14     255.252.0.0        256 K       4 B
       FF.FE.00.00   /15     255.254.0.0        128 K       2 B
       FF.FF.00.00   /16     255.255.0.0         64 K       1 B
       FF.FF.80.00   /17     255.255.128.0       32 K     128 C
       FF.FF.C0.00   /18     255.255.192.0       16 K      64 C
       FF.FF.E0.00   /19     255.255.224.0        8 K      32 C
       FF.FF.F0.00   /20     255.255.240.0        4 K      16 C
       FF.FF.F8.00   /21     255.255.248.0        2 K       8 C
       FF.FF.FC.00   /22     255.255.252.0        1 K       4 C
       FF.FF.FE.00   /23     255.255.254.0      512         2 C
       FF.FF.FF.00   /24     255.255.255.0      256         1 C
       FF.FF.FF.80   /25     255.255.255.128    128       1/2 C
       FF.FF.FF.C0   /26     255.255.255.192     64       1/4 C
       FF.FF.FF.E0   /27     255.255.255.224     32       1/8 C
       FF.FF.FF.F0   /28     255.255.255.240     16      1/16 C
       FF.FF.FF.F8   /29     255.255.255.248      8      1/32 C
       FF.FF.FF.FC   /30     255.255.255.252      4      1/64 C
       FF.FF.FF.FE   /31     255.255.255.254      2     1/128 C
       FF.FF.FF.FF   /32     255.255.255.255      1

    Thanks to the sites that pointed me to the RFC, and to the RFC itself.

    Mutating WebHook to add Node Selectors

    Thanks to these sites:

    1. hmcts/k8s-env-injector provided inspiration for this approach; the code patterns were updated for the latest Kubernetes versions.
    2. phenixblue/imageswap-webhook provided the python based pattern for this approach.
    3. Kubernetes: MutatingAdmissionWebhook

    I added some code to inject annotations and nodeSelectors: https://github.com/prb112/openshift-demo/tree/main/mutating
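
    A quick way to check that the webhook is actually mutating pods is a server-side dry run, which goes through admission without creating anything (assuming the webhook declares sideEffects: None and targets the namespace you point at; the namespace and test pod name here are placeholders):

    ❯ oc -n <target-namespace> run webhook-test --image=registry.access.redhat.com/ubi9/ubi --dry-run=server -o yaml | grep -A2 nodeSelector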

    Installing OpenShift with installer-provisioned infrastructure on IBM Cloud VPC

    This document outlines installing OpenShift on IBM Cloud VPC with IPI using the openshift-installer.

    As of OpenShift 4.13, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud VPC. The installation program provisions the required infrastructure, which you can then further customize.

    This document describes the creation of an OCP cluster using IPI (Installer Provisioned Infrastructure) on an existing IBM Cloud VPC.

    This setup is used with the day-2 operations on PowerVS to make a multi-arch compute cluster.

    1. Create IBM API Key
    2. Create the IAM Services
    3. Pick your build
    4. Deploy

    1. Create IBM API Key

    1. Navigate to API keys (IAM → API keys)
    2. Click Create
    3. Enter name rdr-demo
    4. Click Create
    5. Copy your API key; it’ll be used later on.
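
    If you prefer the CLI, the same API key can be created with ibmcloud (the name, description, and output file are examples):

    ❯ ibmcloud iam api-key-create rdr-demo -d "API key for the IPI install" --file rdr-demo-key.json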

    2. Create the IAM Services

    1. Navigate to Service IDs (IAM → Service IDs)
    2. Click Create and name the service ID rdr-demo to identify your team.
    3. Assign access:
    • Internet Services (All): Viewer, Operator, Editor, Reader, Writer, Manager, Administrator
    • Cloud Object Storage (All): Viewer, Operator, Editor, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Administrator
    • IAM Identity Service (All): Viewer, Operator, Editor, Administrator, ccoctlPolicy, policycreate
    • Resource group only (ocp-dev-resource-group resource group): Viewer, Administrator, Editor, Operator
    • VPC Infrastructure Services (All): Viewer, Operator, Editor, Reader, Writer, Administrator, Manager
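
    The service ID itself can also be created from the CLI before assigning the access above (the name and description are examples; the policies can then be added in the console or with ibmcloud iam service-policy-create):

    ❯ ibmcloud iam service-id-create rdr-demo -d "Service ID for the OCP IPI install"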
    

    3. Pick your build

    I used 4.13.0-rc.7.

    4. Deploy

    1. Connect to your jumpserver or bastion where you are doing the deployment.

    Tip: it’s worth having tmux installed for this install (it’ll take about 1h30m)

    2. Export the API key you created above
    ❯ export IC_API_KEY=<REDACTED>
    
    3. Create a working folder
    ❯ mkdir -p ipi-vpc-414-rc7
    ❯ cd ipi-vpc-414-rc7
    
    4. Download the installers and extract to the binary folder.
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/ccoctl-linux.tar.gz
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-client-linux.tar.gz
    ❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-install-linux.tar.gz
    ❯ tar xvf ccoctl-linux.tar.gz --dir /usr/local/bin/
    ❯ tar xvf openshift-client-linux.tar.gz --dir /usr/local/bin/
    ❯ tar xvf openshift-install-linux.tar.gz --dir /usr/local/bin/
    
    5. Verify the openshift-install version is correct.
    ❯ openshift-install version
    openshift-install 4.13.0-rc.7
    built from commit 3e0b2a2ec26d9ffcca34b361896418499ad9d603
    release image quay.io/openshift-release-dev/ocp-release@sha256:aae5131ec824c301c11d0bf11d81b3996a222be8b49ce4716e9d464229a2f92b
    release architecture amd64
    
    6. Copy over your pull-secret.

    a. Login with your Red Hat id

    b. Navigate to https://console.redhat.com/openshift/install/ibm-cloud 

    c. Scroll down the page and copy the pull-secret.

    This pull-secret should work for you; save it for later as pull-secret.txt in the working directory.

    7. Extract the CredentialsRequest objects and create the credentials.
    RELEASE_IMAGE=$(openshift-install version | awk '/release image/ {print $3}')
    oc adm release extract --cloud=ibmcloud --credentials-requests $RELEASE_IMAGE --to=rdr-demo
    ccoctl ibmcloud create-service-id --credentials-requests-dir rdr-demo --output-dir rdr-demo-out --name rdr-demo --resource-group-name ocp-dev-resource-group
    
    8. Create the install-config
    ❯ openshift-install create install-config --dir rc7_2
    ? SSH Public Key /root/.ssh/id_rsa.pub                                                                     
    ? Platform ibmcloud                                                                                        
    ? Region jp-osa                                                                                            
    ? Base Domain ocp-multiarch.xyz (rdr-multi-is)                                                             
    ? Cluster Name rdr-multi-pb                                                                                
    ? Pull Secret [? for help] ********************************************************************************
    ***********************************
    INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
    
    9. Edit the install-config.yaml to add resourceGroupName
    platform:
      ibmcloud:
        region: jp-osa
        resourceGroupName: my-resource-group 
    
    10. Copy the generated ccoctl manifests over.
    ❯ cp rdr-demo-out/manifests/* rc7_1/manifests/
    
    11. Create the manifests.
    ❯ openshift-install create manifests --dir=rc7_1
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
    
    12. Create the cluster.
    ❯ openshift-install create cluster --dir=rc7_3
    INFO Consuming Worker Machines from target directory
    INFO Consuming Common Manifests from target directory
    INFO Consuming Openshift Manifests from target directory
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Consuming Master Machines from target directory
    INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.13-9.2/builds/413.92.202305021736-0/x86_64/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2.gz?sha256=222abce547c1bbf32723676f4977a3721c8a3788f0b7b6b3496b79999e8c60b3'
    INFO The file was found in cache: /root/.cache/openshift-installer/image_cache/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2. Reusing...
    INFO Creating infrastructure resources...
    INFO Waiting up to 20m0s (until 12:09PM) for the Kubernetes API at https://api.xyz.ocp-multiarch.xyz:6443... 
    INFO API v1.26.3+b404935 up                       
    INFO Waiting up to 30m0s (until 12:19PM) for bootstrapping to complete... 
    INFO Destroying the bootstrap resources...        
    INFO Waiting up to 40m0s (until 12:41PM) for the cluster at https://api.xyz.ocp-multiarch.xyz:6443 to initialize... 
    INFO Checking to see if there is a route at openshift-console/console... 
    INFO Install complete!                            
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ipi-vpc-414-rc7/rc7_3/auth/kubeconfig' 
    INFO Access the OpenShift web-console here: 
    INFO Login to the console with user: "kubeadmin", and password: "xxxxxxxxx-wwwwww-xxxx-aas" 
    INFO Time elapsed: 1h28m9s      
    
    13. Verify the cluster

    a. Set the KUBECONFIG provided by the installation

    export KUBECONFIG=$(pwd)/rc7_1/auth/kubeconfig
    

    b. Check the nodes are Ready

    ❯  oc get nodes
    NAME                                    STATUS   ROLES                  AGE     VERSION
    rdr-multi-ca-rc6-tplwd-master-0         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-master-1         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-master-2         Ready    control-plane,master   5h13m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-pfqjx   Ready    worker                 4h47m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-th8j4   Ready    worker                 4h47m   v1.26.3+b404935
    rdr-multi-ca-rc6-tplwd-worker-1-xl75m   Ready    worker                 4h53m   v1.26.3+b404935
    

    c. Check Cluster Operators

    ❯ oc get co
    NAME                                       VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    authentication                             4.13.0-rc.6   True        False         False      4h43m
    baremetal                                  4.13.0-rc.6   True        False         False      5h5m
    cloud-controller-manager                   4.13.0-rc.6   True        False         False      5h13m
    cloud-credential                           4.13.0-rc.6   True        False         False      5h18m
    cluster-autoscaler                         4.13.0-rc.6   True        False         False      5h5m
    config-operator                            4.13.0-rc.6   True        False         False      5h7m
    console                                    4.13.0-rc.6   True        False         False      4h47m
    control-plane-machine-set                  4.13.0-rc.6   True        False         False      5h5m
    csi-snapshot-controller                    4.13.0-rc.6   True        False         False      4h54m
    dns                                        4.13.0-rc.6   True        False         False      4h54m
    etcd                                       4.13.0-rc.6   True        False         False      4h57m
    image-registry                             4.13.0-rc.6   True        False         False      4h50m
    ingress                                    4.13.0-rc.6   True        False         False      4h51m
    insights                                   4.13.0-rc.6   True        False         False      5h
    kube-apiserver                             4.13.0-rc.6   True        False         False      4h53m
    kube-controller-manager                    4.13.0-rc.6   True        False         False      4h53m
    kube-scheduler                             4.13.0-rc.6   True        False         False      4h52m
    kube-storage-version-migrator              4.13.0-rc.6   True        False         False      4h54m
    machine-api                                4.13.0-rc.6   True        False         False      4h48m
    machine-approver                           4.13.0-rc.6   True        False         False      5h5m
    machine-config                             4.13.0-rc.6   True        False         False      5h6m
    marketplace                                4.13.0-rc.6   True        False         False      5h5m
    monitoring                                 4.13.0-rc.6   True        False         False      4h45m
    network                                    4.13.0-rc.6   True        False         False      5h8m
    node-tuning                                4.13.0-rc.6   True        False         False      4h54m
    openshift-apiserver                        4.13.0-rc.6   True        False         False      4h47m
    openshift-controller-manager               4.13.0-rc.6   True        False         False      4h54m
    openshift-samples                          4.13.0-rc.6   True        False         False      4h50m
    operator-lifecycle-manager                 4.13.0-rc.6   True        False         False      5h6m
    operator-lifecycle-manager-catalog         4.13.0-rc.6   True        False         False      5h6m
    operator-lifecycle-manager-packageserver   4.13.0-rc.6   True        False         False      4h51m
    service-ca                                 4.13.0-rc.6   True        False         False      5h7m
    storage                                    4.13.0-rc.6   True        False         False      4h51m
    

    Note – Confirm that all master/worker nodes are Ready and all cluster operators are Available and not degraded.

    14. Verify the browser login

    a. Open a browser and log in to the console URL using the available credentials, e.g.,

    URL - https://console-openshift-console.apps.xxxxxx.ocp-multiarch.xyz
    	Username – kubeadmin
    	Password - <Generated Password>
    
    15. Destroy the cluster. Run the command below, specifying the installation directory.
    ❯ ./openshift-install destroy cluster --dir  ocp413-rc6 --log-level=debug
    

    This should destroy all resources created for the cluster. If you have provisioned other resources in the generated subnets, the destroy command will fail.

    Notes

    1. You can use a pre-provisioned VPC; see https://docs.openshift.com/container-platform/4.12/installing/installing_ibm_cloud_public/installing-ibm-cloud-vpc.html#installing-ibm-cloud-vpc
    2. Cloud credential requests – an admin will have to create these for you, and you’ll need to copy them over to the right locations in manifests/
    3. Use --log-level debug with the installer to inspect the run.

    References

    1. installing on ibm cloud vpc
    2. create service id
    3. Exporting the IBM Cloud VPC API key
  • Weekly Notes

    I found these things very helpful this week:

    Tip: Creating a Manifest List Image for Acme Airlines

    podman manifest create quay.io/pbastide_rh/openshift-demo:acme-air-414 \
      quay.io/pbastide_rh/openshift-demo:acme-airlines-414-amd64 \
      quay.io/pbastide_rh/openshift-demo:acme-airlines-414-ppc64le \
      quay.io/pbastide_rh/openshift-demo:acme-airlines-414-s390x
    podman manifest push quay.io/pbastide_rh/openshift-demo:acme-air-414 quay.io/pbastide_rh/openshift-demo:acme-air-414
    Copying 3 of 3 images in list
    Copying image sha256:7832d25f2dce210e453eac6b88f9cb3ebd49b8f89fb4861cd23bc19bfb27ab86 (1/3)
    Getting image source signatures
    Copying blob sha256:5b341ad1266c1f4dd6017a2efafe35397a6b9317faf7933137e7368f778bca2a
    Copying config sha256:8ba7bc1af7fe17f17df0707fbc0835392641902a428871fe4d5e4363b56f4ce3
    Writing manifest to image destination
    Storing signatures
    Copying image sha256:511ee668c00c5becd6b979d54161507be493d8c6bb954c287098f3c2fd0db314 (2/3)
    Getting image source signatures
    Copying blob sha256:6087b8848d1db68cc3e10e4ba17aa6d81efb01a1486a59226c579fe6c9eae989
    Copying config sha256:dd9d7a1188a6135654fdc7d503664f381467bc1955fb7700583cf23fa61c1388
    Writing manifest to image destination
    Storing signatures
    Copying image sha256:e48177344209196fd529abb531f2dbc3e414ea0d69f48a7c7838b4725058aed3 (3/3)
    Getting image source signatures
    Copying blob sha256:fd555f115f1f8370328b19ded057891778434997fc4f63717ddfe93786b71f5c
    Copying config sha256:fc4c32ec264db03c9e4b7e441d16826cc1878b6e5a7ef1f432a2005b4687d89d
    Writing manifest to image destination
    Storing signatures
    Writing manifest list to image destination
    Storing list signatures
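
    To double-check that all three architectures made it into the pushed list, you can inspect it the same way as later in these notes; it should report amd64, ppc64le, and s390x:

    ❯ podman manifest inspect quay.io/pbastide_rh/openshift-demo:acme-air-414 | jq -r '.manifests[].platform.architecture'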

    Tip: Sock Shop Demo

    Provides a solid microservices demo.

    GitHub https://microservices-demo.github.io/

    Tip: NUMA Background

    There are some issues with NUMA control that make workloads much harder to manage, and there are known issues with NUMA nodes on container platforms:

    1. https://github.com/kubernetes-sigs/node-feature-discovery/issues/84
    2. https://www.mongodb.com/docs/manual/administration/production-notes/#mongodb-and-numa-hardware
    3. https://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
  • Weekly Notes

    There are so many interesting things to share:

    1. google/go-containerregistry has some super helpful tools; in fact, I raised a PR to make sure they build ppc64le binaries #1680

    crane is a tool for interacting with remote images and registries.

    You can extract a binary (here, my-util) for a given architecture using:

    crane export ppc64le/image-id:tag image.tar
    tar xvf image.tar bin/my-util
    

    You can extract a binary from a manifest-list image using:

    crane export --platform ppc64le image-id:tag image.tar
    tar xvf image.tar bin/my-util
    
    2. I found ko, which enables multi-arch builds (a complete manifest-list image).
    3. Quickly check a manifest-list image’s supported architectures:
    podman manifest inspect registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 | jq -r '.manifests[].platform.architecture'
    amd64
    arm
    arm64
    ppc64le
    s390x
    
    4. My team tagged new releases for:

    a. IBM/powervs-tang-server-automation: v1.0.4
    b. IBM/powervm-tang-server-automation: v1.0.0