Author: Paul

  • Useful Notes for September and October 2023

    Hi everyone, I’ve been heads down working on Multiarchitecture Compute and the Power platform for IBM.

    How to add /etc/hosts file entries in OpenShift containers

    You can add host aliases to the Pod definition, which is handy if the code is hard-coded with a DNS entry.

          hostAliases:
          - ip: "127.0.0.1"
            hostnames:
            - "home"
         - ip: "10.1.x.x"
            hostnames:
            - "remote-host"
    https://access.redhat.com/solutions/3696301
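
    For reference, here is that snippet in the context of a complete Pod definition (a minimal sketch; the Pod name and image are hypothetical):

          apiVersion: v1
          kind: Pod
          metadata:
            name: hostaliases-demo        # hypothetical name
          spec:
            hostAliases:
            - ip: "127.0.0.1"
              hostnames:
              - "home"
            containers:
            - name: app
              image: registry.example.com/app:latest   # hypothetical image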

    Infrastructure Nodes in OpenShift 4

    A link about infra nodes, which serve a dedicated role in the cluster by hosting infrastructure components such as the router, registry, and monitoring.

    https://access.redhat.com/solutions/5034771
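
    As a quick refresher, moving a worker into the infra role usually comes down to labeling the node and optionally tainting it so only infrastructure workloads land there. A minimal sketch, where the node name is hypothetical:

      ❯ oc label node worker-infra-0 node-role.kubernetes.io/infra=""
      ❯ oc adm taint nodes worker-infra-0 node-role.kubernetes.io/infra=reserved:NoSchedule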

    Multiarchitecture Compute Research

    Calling all IBM Power customers looking to impact Power modernization capabilities. The IBM Power Design Team is facilitating a study to understand customer sentiment toward Multi-Architecture Computing (MAC) and needs your help.

    https://community.ibm.com/community/user/powerdeveloper/blogs/erica-albert/2023/10/11/multi-architecture-computing-research-recruit 

    This is an interesting opportunity to work with customers on IBM Power and OpenShift as they mix architectures in their workloads to meet their needs.

  • Weekly Notes

    Here are my weekly notes:

    Flow Log Collector

    If you are using a VPC, you can track connections to and from your subnets using a flow log collector.

    ❯ find . -name "*.gz" -exec gunzip {} \;

    ❯ grep -Rh 192.168.200.10 | jq -r '.flow_logs[] | select(.action == "rejected") | "\(.initiator_ip),\(.target_ip),\(.target_port)"' | sort -u | grep 192.168.200.10

    10.245.0.5,192.168.200.10,36416,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,36430,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,58894,2023-08-08T14:31:32Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:41Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.1.5,192.168.200.10,9100,2023-08-08T14:31:32Z
    10.245.129.4,192.168.200.10,43524,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9100,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9537,2023-08-08T14:50:36Z

    Image Pruner Reports an Error

    You can check the image-registry status on the cluster operator.

    ❯ oc get co image-registry
    image-registry                             4.14.0-ec.4   True        False         True       3d14h   ImagePrunerDegraded: Job has reached the specified backoff limit
    

    The cronjob probably failed, so we can check that it exists.

    ❯ oc get cronjob -n openshift-image-registry
    NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    image-pruner   0 0 * * *   False     0        16h             3d15h
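
    To see why it failed before clearing the status, check the logs of the most recent pruner job (a sketch; the job name suffix will vary):

      ❯ oc get jobs -n openshift-image-registry
      ❯ oc logs -n openshift-image-registry job/image-pruner-28161234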
    

    We can run a one-off to clear the status above.

    ❯ oc create job --from=cronjob/image-pruner one-off-image-pruner -n openshift-image-registry
    job.batch/one-off-image-pruner created
    

    Then your image-registry should be a-ok.

    Ref: https://gist.github.com/ryderdamen/73ff9f93cd61d5dd45a0c50032e3ae03

  • Weekly Notes

    Here are the very cool things I learned this week:

    CRI-O Graduated

    CRI-O has graduated at the CNCF – see the announcement Cloud Native Computing Foundation Announces Graduation of CRI-O. This points to the maturity of Cloud Native runtimes.

    Checking an Ignition on a Failed Instance

    I use PowerVS and had a bad ignition file, so I logged in via the console in PowerVS. Then I ran journalctl -xe, mounted the disk with the ignition file, and catted it out.
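
    A rough sketch of what that looks like, assuming the ignition config was written to the boot partition (device names and paths vary by platform):

      # journalctl -xe | grep -i ignition
      # mount /dev/disk/by-label/boot /mnt
      # cat /mnt/ignition/config.ign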

  • Two ways to grab the Ignition for RHCOS/OCP4

    There are two ways to grab the ignition files for the workers in the cluster:

    1. Download the ignition file (stored here in the data folder) using curl:
    • curl -k https://api.demo.ocp-multiarch.xyz:22623/config/worker -o worker.ign -H "Accept: application/vnd.coreos.ignition+json;version=3.2.0"

    2. Download the ignition file using the oc command line:

    • oc extract -n openshift-machine-api secret/worker-user-data --keys=userData --to=-
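
    Either way, you can sanity-check what you pulled by inspecting the ignition version field (a small sketch, assuming jq is installed):

    • jq .ignition.version worker.ign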

    I’m adding this because I use it every day, and others might find it helpful.

  • Protected: Webinar: Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers

    This content is password protected.

  • Krew plugin on ppc64le

    Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Hey everyone,

    Krew, the kubectl plugin package manager, is now available on Power. The v0.4.4 release includes a ppc64le download, so you can start taking advantage of the krew plugin list. It also works with OpenShift.

    The Krew website has a list of plugins (https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross-compiled, such as view-utilization.

    To take advantage of Krew with OpenShift, here are a few steps:

    1. Download the krew-linux plugin
    # curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
    
    2. Extract the krew plugin
    tar xvf krew-linux_ppc64le.tar.gz 
    ./LICENSE
    ./krew-linux_ppc64le
    
    3. Move it to /usr/bin so it's picked up by oc.
    mv krew-linux_ppc64le /usr/bin/kubectl-krew
    
    4. Update the krew plugin
    # kubectl krew update
    WARNING: To be able to run kubectl plugins, you need to add
    the following to your ~/.bash_profile or ~/.bashrc:
    
        export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
    
    and restart your shell.
    
    Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
    Updated the local copy of plugin index.
    
    5. Update your shell:
    # echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
    
    6. Restart your session (exit and come back to the shell so the variables are loaded)
    7. Try oc krew list
    # oc krew list
    PLUGIN  VERSION
    
    8. List all the plugins that support ppc64le.
    # oc krew search  | grep -v 'unavailable on linux/ppc64le'
    NAME                            DESCRIPTION                                         INSTALLED
    allctx                          Run commands on contexts in your kubeconfig         no
    assert                          Assert Kubernetes resources                         no
    bulk-action                     Do bulk actions on Kubernetes resources.            no
    ...
    tmux-exec                       An exec multiplexer using Tmux                      no
    view-utilization                Shows cluster cpu and memory utilization            no
    
    9. Install a plugin
    # oc krew install view-utilization
    Updated the local copy of plugin index.
    Installing plugin: view-utilization
    Installed plugin: view-utilization
    \
     | Use this plugin:
     |      kubectl view-utilization
     | Documentation:
     |      https://github.com/etopeter/kubectl-view-utilization
     | Caveats:
     | \
     |  | This plugin needs the following programs:
     |  | * bash
     |  | * awk (gawk,mawk,awk)
     | /
    /
    WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
       These plugins are not audited for security by the Krew maintainers.
       Run them at your own risk.
    
    10. Use the plugin.
    # oc view-utilization
    Resource     Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
    CPU              7521         16        2400        5        45000        37479        37479
    Memory    33477885952         36  3774873600        4  92931489792  59453603840  59453603840
    

    Tip: There are many more plugins that support ppc64le but do not have their krew manifests updated yet.

    Thanks to PR 755, we have support for ppc64le.

    References

    https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md

    https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz

  • Lessons and Notes for the Week

    Here are some lessons learned from the week (and shared with others):

    TIP: How to create a Power Workspace from the CLI

    This document outlines the steps necessary to create a PowerVS workspace.

    1. Log in to IBM Cloud

    ibmcloud login --sso -r jp-osa -c 65b64c111111114bbfbd893c2c
    
    • -r jp-osa targets the region
    • -c 65b64c111111114bbfbd893c2c is the account that is being targeted.

    2. Install the Plugins for PowerVS and VPC.

    As a good practice, install the powervs and vpc plugins.

    ❯ ibmcloud plugin install power-iaas -f
    ❯ ibmcloud plugin install 'vpc-infrastructure' -f 
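
    You can confirm they installed correctly with:

      ❯ ibmcloud plugin list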
    

    3. Create a Power Systems workspace

    The two elements you should configure below are PVS_REGION and WORKSPACE_NAME; the rest will create the new workspace.
    WORKSPACE_NAME=rdr-mac-osa-n1
    SERVICE_NAME=power-iaas
    RESOURCE_GROUP_NAME=my-resource-group
    SERVICE_PLAN_NAME=power-virtual-server-group
    PVS_REGION=osa21
    ❯ ibmcloud resource service-instance-create \
        "${WORKSPACE_NAME}" \
        "${SERVICE_NAME}" \
        "${SERVICE_PLAN_NAME}" \
        "${PVS_REGION}" \
        -g "${RESOURCE_GROUP_NAME}"
    Creating service instance rdr-mac-osa-n1 in resource group my-resource-group of account Power Account as pb@ibm.xyz...
    OK
    Service instance rdr-mac-osa-n1 was created.
                      
    Name:             rdr-mac-osa-n1
    ID:               crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e3772ff7-48eb-4c81-9ee0-07cf0b5547a5::
    GUID:             e3772ff7-48eb-4c81-9ee0-07cf0b5547a5
    Location:         osa21
    State:            provisioning
    Type:             service_instance
    Sub Type:         
    Allow Cleanup:    false
    Locked:           false
    Created at:       2023-07-11T14:12:07Z
    Updated at:       2023-07-11T14:12:10Z
    Last Operation:             
                      Status    create in progress
                      Message   Started create instance operation

    Thus you are able to create a PowerVS Workspace from the CLI.
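
    Since the workspace comes back in the provisioning state, you can poll it until it goes active. A small sketch using the name from above:

      ❯ ibmcloud resource service-instance "${WORKSPACE_NAME}"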

    Note: Flatcar Linux

    Flatcar Container Linux: a community Linux distribution designed for container workloads, with high security and low maintenance.

    https://www.flatcar.org/

    I learned about Flatcar Linux this week… exciting stuff.

    Demonstration: Tang Server Automation on PowerVM

    I work on the OpenShift Container Platform on Power team, which created a few videos about the powervm-tang-server-automation project. The project provides Terraform-based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management. Thanks to Aditi Jadhav for presenting.

    The Overview

    The Demonstration

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/12/tang-server-automation-on-powervm

    New Blog: Installing OpenShift on IBM Power Virtual Servers with IPI

    My colleague Ashwin posted a new blog using IPI – it covers creating and destroying a “public” OpenShift cluster, i.e., one which is accessible through the internet and whose nodes can access the internet.

    https://community.ibm.com/community/user/powerdeveloper/blogs/ashwin-hendre/2023/07/10/powervs-ipi

  • Notes for the Week

    Here are some notes from the week:

    The Network Observability Operator is now released on IBM Power Systems:

    Network Observability (NETOBSERV) for IBM Power, little endian, for RHEL 9 ppc64le

    https://access.redhat.com/errata/RHSA-2023:3905?sc_cid=701600000006NHXAA2

    https://community.ibm.com/community/user/powerdeveloper/discussion/network-observability-operator-now-available-on-power

  • Notes from the Week

    A few things I learned this week are:

    There is a cool session on IPI PowerVS for OpenShift called Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers Webinar.

    Did you know that you can run Red Hat OpenShift clusters on IBM Power servers? Maybe you do, but you don’t have Power hardware to try it out on, or you don’t have time to learn about OpenShift using the User-Provisioned method of installation. Let us introduce you to the Installer-Provisioned Installation method for OpenShift clusters, also called “an IPI install”, on IBM Power Virtual Servers. IPI installs are much simpler than UPI installs because the installer itself has built-in logic that can provision each and every component your cluster needs.

    Join us on 27 July at 10 AM ET for this 1-hour live webinar to learn why the benefits of the IPI installation method go well beyond installation and into the cluster lifecycle. We’ll show you how to deploy OpenShift IPI on Power Virtual Server with a live demo. And finally, we’ll share some ways that you can try it yourself. Please share any questions by clicking on the Reply button. If you have not done so already, register to join here and get your calendar invite.

    https://community.ibm.com/community/user/powerdeveloper/discussion/introducing-red-hat-openshift-installer-provisioned-installation-ipi-for-ibm-power-virtual-servers-webinar

    Of all the things, I finally started using reverse-search in the shell: CTRL+R on the command line. link or link

    The IBM Power Systems team announced a Tech Preview of Red Hat Ansible Automation Platform on IBM Power

    Continuing our journey to enable our clients’ automation needs, IBM is excited to announce the Technical Preview of Ansible Automation Platform running on IBM Power! Now, in addition to automating against IBM Power endpoints (e.g., AIX, IBM i, etc.), clients will be able to run Ansible Automation Platform components on IBM Power. In addition to IBM Power support for Ansible Automation Platform, Red Hat is also providing support for Ansible running on IBM Z Systems. Now, let’s dive into the specifics of what this entails.

    My team released a new version of the PowerVM Tang Server Automation to fix a routing problem:

    The powervm-tang-server-automation project provides Terraform based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management.

    https://github.com/IBM/powervm-tang-server-automation/tree/v1.0.1

    The RH Ansible/IBM teams have released Red Hat Ansible Lightspeed with IBM Watson Code Assistant. I’m excited to try it out and expand my Ansible usage.

    It’s great to see Kernel Module Management (KMM) release 1.1 with Day-1 support.

    The Kernel Module Management Operator manages out-of-tree kernel modules in Kubernetes.

    https://github.com/kubernetes-sigs/kernel-module-management

  • Tidbits – Terraform and Multiarch

    Here is a compendium of tidbits from the week:

    FYI: Terraform v1.5.0 is released. It’s probably worth updating. link to ppc64le build

    ❯ brew upgrade terraform

    There are new features:

    check blocks for validating infrastructure: module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure. The release also adds a new strcontains function that checks whether a given string contains a given substring. (#33069)

    https://github.com/hashicorp/terraform/releases/tag/v1.5.0
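
    Here’s a minimal sketch combining the two new features; the variable and prefix are hypothetical:

      variable "bucket_name" {
        type    = string
        default = "prod-logs"
      }

      # An independent check block: warns when the assertion fails
      # instead of blocking the plan or apply.
      check "bucket_name_prefix" {
        assert {
          condition     = strcontains(var.bucket_name, "prod-")
          error_message = "Expected bucket_name to contain the prod- prefix."
        }
      }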

    Blog: Build multi-architecture container images on GitHub Actions using native nodes

    My colleague, Yussuf, has posted a blog on using GitHub Actions with native nodes. It’s to the point, and super helpful. link

    There are already a few good blogs [1][2][3] available in this group that demonstrate how to build multi-arch container images on GitHub Actions using Buildx. However, they use QEMU, a free and open-source emulator, for running cross-builds. Using QEMU presents its own problems, the main one being slowness that cannot match running the builds natively. There is also no guarantee that the build will always succeed when the project uses low-level code. These pain points pushed us to use native nodes as part of the same Buildx workflow inside a GitHub Action. If you too want to use native nodes to build your projects on multiple architectures, including ppc64le, then this article is for you.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/06/14/workflow-using-buildx-remote-builders
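
    The core of the approach is appending a native remote node to the Buildx builder inside the workflow. A rough sketch (the endpoint and runner details are hypothetical; see Yussuf’s post for the full workflow):

      - uses: docker/setup-buildx-action@v2
        with:
          # Append a native ppc64le node to the builder over SSH
          append: |
            - endpoint: ssh://builder@ppc64le-host.example.com
              platforms: linux/ppc64le
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/amd64,linux/ppc64le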

    Tip: Custom Build of QEMU on CentOS 9

    I had to do a cross-build for ARM64, so I used QEMU. It’s not necessarily straightforward on CentOS 9; here are the steps I took:

    1. Connect to my machine.
    ❯ ssh cloud-user@<machine IP>
    2. Switch to root
    ❯ sudo -s 
    3. Enable the CodeReady Builder (CRB) repo
    ❯ dnf config-manager --set-enabled crb
    4. Install the build dependencies
    ❯ dnf install -y git make pip vim ninja-build gcc glib2-devel.x86_64 pixman.x86_64 libjpeg-devel giflib-devel pixman-devel cairo-devel pango-devel qemu-kvm edk2-aarch64
    5. Install the Python dependencies
    ❯ pip install Sphinx sphinx-rtd-theme
    6. Pull the QEMU code
    ❯ git clone https://git.qemu.org/git/qemu.git
    7. Change into the qemu directory
    ❯ cd qemu
    8. Apply the patch
    From 14920d35f053c8effd17a232b5144ef43465a85e Mon Sep 17 00:00:00 2001
    From: root <root@cross-build-pbastide.novalocal>
    Date: Tue, 20 Jun 2023 10:02:39 -0400
    Subject: [PATCH] test
    
    ---
     configure | 3 ++-
     1 file changed, 2 insertions(+), 1 deletion(-)
    
    diff --git a/configure b/configure
    index 01a53576a7..02cabb556b 100755
    --- a/configure
    +++ b/configure
    @@ -371,6 +371,7 @@ else
       # be the result of a missing compiler.
       targetos=bogus
     fi
    +targetos=linux
     
     # OS specific
     
    @@ -508,7 +509,7 @@ case "$cpu" in
       sparc64)
         CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;;
     esac
    -
    +CPU_CFLAGS="-m64 -mcx16"
     check_py_version() {
         # We require python >= 3.7.
         # NB: a True python conditional creates a non-zero return code (Failure)
    -- 

    The above patch was generated with git format-patch -1 HEAD.

    9. Create the build directory
    ❯ mkdir -p build
    10. Configure the QEMU build
    ❯ ./configure --target-list=aarch64-softmmu --enable-virtfs --enable-slirp
    11. Build with all your processors.
    ❯ make -j`nproc`
    12. The file should exist and be in your native architecture.
    ❯ file ./build/qemu-system-aarch64
    ./build/qemu-system-aarch64: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=1871423864de4428c4721b8e805304f0142bed6a, for GNU/Linux 3.2.0, with debug_info, not stripped
    13. Download the CentOS qcow2 from link
    ❯ curl -O https://cloud.centos.org/centos/9-stream/aarch64/images/CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
    14. Mount the image to update .ssh/authorized_keys
    modprobe nbd
    qemu-nbd -c /dev/nbd0 CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
    mount /dev/nbd0p1 /mnt
    mkdir -p /mnt/root/.ssh/
    15. Edit the keys to add your public key.
    vim /mnt/root/.ssh/authorized_keys
    16. Unmount the qcow2 image
    umount /mnt
    qemu-nbd -d /dev/nbd0
    17. Start the QEMU VM.
    build/qemu-system-aarch64 -m 4G -M virt -cpu cortex-a57 \
      -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
      -drive if=none,file=CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2,id=hd0 \
      -device virtio-blk-device,drive=hd0 \
      -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22 \
      -nographic

    Finally, you’ll see:

    CentOS Stream 9
    Kernel 5.14.0-115.el9.aarch64 on an aarch64
    
    Activate the web console with: systemctl enable --now cockpit.socket
    
    cross-build-pbastide login: 

    It’s ready-to-go.
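
    Since the -netdev option forwards host port 5555 to the guest’s port 22, you can ssh in from the host once it boots:

      ❯ ssh -p 5555 root@127.0.0.1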

    References
    1. http://cdn.kernel.org/pub/linux/kernel/people/will/docs/qemu/qemu-arm64-howto.html
    2. https://fedoraproject.org/wiki/Architectures/AArch64/Install_with_QEMU
    3. https://wiki-archive.linaro.org/LEG/UEFIforQEMU
    4. https://packages.debian.org/experimental/qemu-efi-aarch64
    5. https://www.redhat.com/sysadmin/install-epel-linux
    6. https://wiki.qemu.org/Hosts/Linux
    7. https://github.com/Automattic/node-canvas/issues/1065#issuecomment-1278496824
    8. https://wiki.debian.org/Arm64Qemu