Category: OpenShift

  • Switching to use Kubernetes with Flannel on RHEL on P10

    I needed to switch from Calico to Flannel. Here is the recipe I followed to set up Kubernetes 1.25.2 on a Power 10 using Flannel.


    1. Connect to both VMs (in split terminal)
    ssh root@control-1
    ssh root@worker-1
    
    1. Run Reset (acknowledge that you want to proceed)
    kubeadm reset
    
    1. Remove Calico (note: piping grep output into iptables -F does nothing useful; filter the saved rules and restore them instead)
    rm /etc/cni/net.d/10-calico.conflist
    rm /etc/cni/net.d/calico-kubeconfig
    iptables-save | grep -v -i cali | iptables-restore
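    Before touching the live rules, the Calico filter can be sanity-checked against sample iptables-save output; a minimal sketch (the sample chains are made up):

    ```shell
    # Fabricated iptables-save output with a mix of Calico and non-Calico rules.
    cat << 'EOF' > /tmp/sample-rules.txt
    -A cali-INPUT -m comment --comment "calico rule" -j ACCEPT
    -A KUBE-FORWARD -j ACCEPT
    -A cali-FORWARD -j ACCEPT
    -A INPUT -j ACCEPT
    EOF

    # Keep only the non-Calico rules; these are what should survive the cleanup.
    grep -v -i cali /tmp/sample-rules.txt
    ```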
    
    1. Initialize the cluster
    kubeadm init --cri-socket=unix:///var/run/crio/crio.sock --pod-network-cidr=192.168.0.0/16
    
    1. Setup kubeconfig
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    1. Add the plugins:
    curl -O https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-ppc64le-v1.1.1.tgz -L
    cp cni-plugins-linux-ppc64le-v1.1.1.tgz /opt/cni/bin
    cd /opt/cni/bin
    tar xvfz cni-plugins-linux-ppc64le-v1.1.1.tgz 
    chmod +x /opt/cni/bin/*
    cd ~
    systemctl restart crio kubelet
    
    1. Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    2. Edit the container images to point to the ppc64le manifests, per the notes in the yaml

    3. Update net-conf.json so the Network matches the pod network CIDR passed to kubeadm init

      net-conf.json: |
        {
          "Network": "192.168.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
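    With the manifest edited, apply it and confirm the flannel pods come up before joining workers; a sketch, assuming the download was saved as kube-flannel.yml (the namespace varies by manifest version, so a label selector is used):

    ```shell
    # Apply the edited flannel manifest.
    kubectl apply -f kube-flannel.yml

    # Flannel lands in kube-system or kube-flannel depending on the manifest
    # version; the app=flannel label matches either.
    kubectl get pods -A -l app=flannel
    ```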
    
    1. Join the Cluster

    kubeadm join 1.1.1.1:6443 --token y004bg.sc65cp7fqqm7ladg \
      --discovery-token-ca-cert-hash sha256:1c32dacdf9b934b7bbd6d13fde9312a35709e2f5849008acec8f597eb5a5dad9
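    If the original token has expired by the time the workers are ready, a fresh join command can be minted on the control plane:

    ```shell
    # Run on the control plane node: creates a new token and prints the
    # complete kubeadm join command, including the CA cert hash.
    kubeadm token create --print-join-command
    ```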

    1. Add role to the workers
    kubectl label node worker-01.ocp-power.xyz node-role.kubernetes.io/worker=worker
    

    Ref: https://gist.github.com/rkaramandi/44c7cea91501e735ea99e356e9ae7883
    Ref: https://www.buzzwrd.me/index.php/2022/02/16/calico-to-flannel-changing-kubernetes-cni-plugin/

  • Operator Doesn’t Install Successfully: How to restart it

    You see there is an issue with unpacking your operator in the Operator Hub.

    Restart the download by recreating the Job and the Subscription.

    1. Find the Job (per RH 6459071)
    $ oc get job -n openshift-marketplace -o json | jq -r '.items[] | select(.spec.template.spec.containers[].env[].value|contains ("myop")) | .metadata.name'

    2. Delete the download Job and its ConfigMap

    for i in $(oc get job -n openshift-marketplace -o json | jq -r '.items[] | select(.spec.template.spec.containers[].env[].value|contains ("myop")) | .metadata.name'); do
      oc delete job $i -n openshift-marketplace; 
      oc delete configmap $i -n openshift-marketplace; 
    done

    3. Recreate your Subscription and you’ll see more details on the Job’s failure. Keep an eagle eye on the updates as they roll over quickly.

    Message: rpc error: code = Unknown desc = pinging container registry registry.stage.redhat.io: Get "https://xyz/v2/": x509: certificate signed by unknown authority.

    You’ve seen how to restart the download/pull through job.
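    Since the failure details roll over quickly, one way to catch them is to watch the marketplace namespace while the Subscription is recreated; a sketch, where "myop" is the placeholder operator name from the jq filter above:

    ```shell
    # Watch the unpack jobs as they are recreated, and filter events
    # for the operator of interest.
    oc get jobs -n openshift-marketplace -w &
    oc get events -n openshift-marketplace -w | grep -i myop
    ```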

  • IBM Cloud cluster-api: building a CAPI image

    Per the IBM Cloud Kubernetes cluster-api provider, I followed the raw instructions with some amendments.

    Steps

    1. Provision an Ubuntu 20.04 image.

    2. Update the apt repository

    $ apt update
    
    1. Install the dependencies (more than what’s in the instructions)
    $ apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst cpu-checker libguestfs-tools libosinfo-bin make git unzip ansible python3-pip
    
    1. Clone the image-builder repo
    $ git clone https://github.com/kubernetes-sigs/image-builder.git
    
    1. Change to the capi image
    $ cd image-builder/images/capi
    
    1. Run make deps-raw to confirm everything is working.
    $ make deps-raw
    
    1. Create the ubuntu-2004 image.
    $ make build-qemu-ubuntu-2004
    

    Once complete you’ll see:

    ==> qemu: Running post-processor: custom-post-processor (type shell-local)
    ==> qemu (shell-local): Running local shell script: /tmp/packer-shell078717884
    Build 'qemu' finished after 12 minutes 8 seconds.
    
    ==> Wait completed after 12 minutes 8 seconds
    
    ==> Builds finished. The artifacts of successful builds are:
    --> qemu: VM files in directory: ./output/ubuntu-2004-kube-v1.22.9
    
    1. Append the .qcow2 extension
    $ mv ./output/ubuntu-2004-kube-v1.22.9/ubuntu-2004-kube-v1.22.9 ./output/ubuntu-2004-kube-v1.22.9/ubuntu-2004-kube-v1.22.9.qcow2
    

    You can now upload the output to IBM Cloud Object Storage.
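    The upload itself can be scripted with the IBM Cloud CLI’s cloud-object-storage plugin; a sketch, assuming the plugin is installed and a bucket named capi-images (a made-up name) already exists:

    ```shell
    # Upload the qcow2 image to an existing COS bucket.
    # Bucket name "capi-images" is an assumption for illustration.
    ibmcloud cos upload \
      --bucket capi-images \
      --key ubuntu-2004-kube-v1.22.9.qcow2 \
      --file ./output/ubuntu-2004-kube-v1.22.9/ubuntu-2004-kube-v1.22.9.qcow2
    ```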

    A couple of quick tips:

    • If you see any warnings, you can get more detail by setting export PACKER_LOG=1, which enables full Packer logging. See Packer
    • KVM module not found indicates nested KVM is not enabled; you’ll have to exit the VM and enable nested virtualization on the host. Fedora: Docs
    • Adding a VM image to VPC is documented here: Console: customImage
  • IBM Power Developer eXchange – An opportunity to connect like minds

    There is a new IBM Power Developer eXchange where you can connect with the team I’m a part of to discuss OpenShift on Power or Kubernetes on Power. It’s an avenue to talk directly to the Subject Matter Experts in an open arena.

    Are you interested in furthering the development of open source applications on IBM Power? JOIN the IBM Power Developer eXchange to access numerous resources and expand your knowledge. https://ibm.biz/power-developer #PDeX #PowerSystems #Linux #OSS

  • Downloading pvsadm and getting VIP details

    pvsadm is an unsupported tool that helps with Power Virtual Server administration. I needed this detail for my CAPI tests.

    1. Get the latest download_url per StackOverflow
    $ curl -s https://api.github.com/repos/ppc64le-cloud/pvsadm/releases/latest | grep browser_download_url | cut -d '"' -f 4
    ...
    https://github.com/ppc64le-cloud/pvsadm/releases/download/v0.1.7/pvsadm-linux-ppc64le
    ...
    
    1. Download the pvsadm tool using the url from above.
    $ curl -o pvsadm -L https://github.com/ppc64le-cloud/pvsadm/releases/download/v0.1.7/pvsadm-linux-ppc64le
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 21.4M  100 21.4M    0     0  34.9M      0 --:--:-- --:--:-- --:--:-- 34.9M
    
    1. Make the pvsadm tool executable
    $ chmod +x pvsadm
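    The download steps above can be combined into one pipeline that always fetches the latest ppc64le binary (a sketch built from the same commands):

    ```shell
    # Resolve the latest release asset URL, download it, and make it executable.
    URL=$(curl -s https://api.github.com/repos/ppc64le-cloud/pvsadm/releases/latest \
      | grep browser_download_url | grep ppc64le | cut -d '"' -f 4)
    curl -o pvsadm -L "$URL"
    chmod +x pvsadm
    ```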
    
    1. Create the API Key at https://cloud.ibm.com/iam/apikeys

    2. On the terminal, export the IBMCLOUD_API_KEY.

    $ export IBMCLOUD_API_KEY=...REDACTED...      
    
    1. Grab the details of your network VIP using your service name and network.
    $ ./pvsadm get ports --instance-name demo --network topman-pub-net
    I0808 10:41:26.781531  125151 root.go:49] Using an API key from IBMCLOUD_API_KEY environment variable
    +-------------+----------------+----------------+-------------------+--------------------------------------+--------+
    | DESCRIPTION |   EXTERNALIP   |   IPADDRESS    |    MACADDRESS     |                PORTID                | STATUS |
    +-------------+----------------+----------------+-------------------+--------------------------------------+--------+
    |             | 1.1.1.1        | 2.2.2.2        | aa:24:7c:5d:cb:bb | aaa-bbb-ccc-ddd-eee                  | ACTIVE |
    +-------------+----------------+----------------+-------------------+--------------------------------------+--------+
    
  • PowerVS: Grabbing a VM Instance Console

    1. Create the API Key at https://cloud.ibm.com/iam/apikeys

    2. On the terminal, export the IBMCLOUD_API_KEY.

    $  export IBMCLOUD_API_KEY=...REDACTED...      
    
    1. Login to the IBM Cloud using the command-line tool https://www.ibm.com/cloud/cli
    $ ibmcloud login --apikey "${IBMCLOUD_API_KEY}" -r ca-tor
    API endpoint: https://cloud.ibm.com
    Authenticating...
    OK
    
    Targeted account Demo <-> 1012
    
    Targeted region ca-tor
    
    Users of 'ibmcloud login --vpc-cri' need to use this API to login until July 6, 2022: https://cloud.ibm.com/apidocs/vpc-metadata#create-iam-token
                          
    API endpoint:      https://cloud.ibm.com   
    Region:            ca-tor   
    User:              myuser@us.ibm.com   
    Account:           Demo <-> 1012   
    Resource group:    No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'   
    CF API endpoint:      
    Org:                  
    Space:  
    
    1. List your PowerVS services
    $ ibmcloud pi sl
    Listing services under account Demo as user myuser@us.ibm.com...
    ID                                                                                                                   Name   
    crn:v1:bluemix:public:power-iaas:mon01:a/999999c1f1c29460e8c2e4bb8888888:ADE123-8232-4a75-a9d4-0e1248fa30c6::     demo-service   
    
    1. Target your PowerVS instance
    $ ibmcloud pi st crn:v1:bluemix:public:power-iaas:mon01:a/999999c1f1c29460e8c2e4bb8888888:ADE123-8232-4a75-a9d4-0e1248fa30c6::    
    
    1. List the PowerVS Services’ VMs
    $ ibmcloud pi ins                                                  
    Listing instances under account Demo as user myuser@us.ibm.com...
    ID                                     Name                                   Path   
    12345-ae8f-494b-89f3-5678   control-plane-x       /pcloud/v1/cloud-instances/abc-def-ghi-jkl/pvm-instances/12345-ae8f-494b-89f3-5678   
    
    1. Create a Console for the VM instance you want to look at:
    $ ibmcloud pi ingc control-plane-x
    Getting console for instance control-plane-x under account Demo as user myuser@us.ibm.com...
                     
    Name          control-plane-x   
    Console URL   https://mon01-console.power-iaas.cloud.ibm.com/console/index.html?path=%3Ftoken%3not-real  
    
    1. Click on the Console URL, and view it in your browser. It can be very helpful.

    I was able to diagnose that I had the wrong reference image.

  • Pause: Use this one, not that one.

    The Red Hat Ecosystem Catalog contains a supported version of the pause container, based on ubi8. This is the best version of the pause container to use for multiarch purposes.

    Don’t use docker.io/ibmcom/pause-ppc64le:3.1 when you have a multi-architecture version.

    Steps

    1. Create a Pod yaml pointing to the Red Hat registry.
    $ cat << EOF > pod.yaml 
    kind: Pod
    apiVersion: v1
    metadata:
      name: demopod-1
      labels:
        demo: foo
    spec:
      containers:
      - name: pause
        image: registry.access.redhat.com/ubi8/pause:latest
    EOF
    
    1. Create the Pod
    $ oc apply -f pod.yaml 
    pod/demopod-1 created
    
    1. Check the Pod is running.
    $ oc get pods -l demo=foo
    NAME        READY   STATUS    RESTARTS   AGE
    demopod-1   1/1     Running   0          89s
    

    You have a Pause container running in OpenShift.
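    To confirm the image really is multi-architecture before swapping it in, the manifest list can be inspected; a sketch, assuming podman and jq are installed and the registry is reachable:

    ```shell
    # List the architectures published under the ubi8 pause image's
    # manifest list (ppc64le should appear among them).
    podman manifest inspect registry.access.redhat.com/ubi8/pause:latest \
      | jq -r '.manifests[].platform.architecture'
    ```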

  • Identifying Kernel Memory Usage Culprits

    If you suspect kernel memory is leaking, slabtop --sort c shows which caches have high memory usage. You can then use the following steps to confirm the culprit using slub_debug=U. (Thanks to ServerFault).

    1. Login to OpenShift
    $ oc login
    
    1. Check that you don’t already see 99-master-kargs-slub.
    $ oc get mc 99-master-kargs-slub
    
    1. Create the slub_debug=U kernel argument. Note that it’s assigned to the master role.
    cat << EOF > 99-master-kargs-slub.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 99-master-kargs-slub
    spec:
      kernelArguments:
      - slub_debug=U
    EOF
    
    1. Create the Kernel Arguments Machine Config.
    $ oc apply -f 99-master-kargs-slub.yaml 
    machineconfig.machineconfiguration.openshift.io/99-master-kargs-slub created
    
    1. Wait until the master nodes are updated.
    $ oc wait mcp/master --for condition=updated --timeout=25m
    machineconfigpool.machineconfiguration.openshift.io/master condition met
    
    1. Confirm the node status as soon as it’s up, and list the master nodes.
    $ oc get nodes -l machineconfiguration.openshift.io/role=master
    NAME                                                    STATUS   ROLES    AGE   VERSION
    lon06-master-0.xip.io   Ready    master   30d   v1.23.5+3afdacb
    lon06-master-1.xip.io   Ready    master   30d   v1.23.5+3afdacb
    lon06-master-2.xip.io   Ready    master   30d   v1.23.5+3afdacb
    
    1. Connect to the master node and switch to the root user
    $ ssh core@lon06-master-0.xip.io
    sudo su - 
    
    1. Check the kmalloc-32 allocation
    $  cat /sys/kernel/slab/kmalloc-32/alloc_calls | sort -n  | tail -n 5
       4334 iomap_page_create+0x80/0x190 age=0/654342/2594020 pid=1-39569 cpus=0-7
       5655 selinux_sk_alloc_security+0x5c/0xd0 age=916/1870136/2594937 pid=0-39217 cpus=0-7
      41908 __kernfs_new_node+0x70/0x2d0 age=406911/2326294/2594938 pid=0-38398 cpus=0-7
    9969728 memcg_update_all_list_lrus+0x1bc/0x550 age=2564414/2567167/2594607 pid=1 cpus=0-7
    19861376 __list_lru_init+0x2b8/0x480 age=406870/2007921/2594449 pid=1-38406 cpus=0-7
    

    This points to memcg_update_all_list_lrus using a lot of resources, which is fixed in a patch to the Linux Kernel.
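    To see whether a suspect cache keeps growing over time, the slab statistics can be sampled periodically; a simple sketch using procfs (requires root):

    ```shell
    # Print the kmalloc-32 line from /proc/slabinfo every 60 seconds;
    # a steadily climbing active_objs count supports the leak theory.
    watch -n 60 "grep '^kmalloc-32 ' /proc/slabinfo"
    ```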

    References

    1. ServerFault: Debugging kmalloc-64 slab allocations / memory leak – https://serverfault.com/questions/1020241/debugging-kmalloc-64-slab-allocations-memory-leak
    2. Kmalloc Internals: Exploring Linux Kernel Memory Allocation – http://www.jikos.cz/jikos/Kmalloc_Internals.html
    3. StackOverflow: What is different functions malloc and kmalloc – https://stackoverflow.com/questions/20079767/what-is-different-functions-malloc-and-kmalloc
    4. How I investigated memory leaks in Go using pprof on a large codebase
    5. Using Go 1.10 new trace features to debug an integration test
    6. Kernel Memory Leak Detector
    7. go-slab – slab allocator in go
    8. Red Hat Customer Support Portal: Interpreting /proc/meminfo and free output for Red Hat Enterprise Linux
    9. Red Hat Customer Support Portal: Determine how much memory is being used on the system
    10. Red Hat Customer Support Portal: Determine how much memory and what kind of objects the kernel is allocating
  • etcdctl hacks

    If you are running etcd and need to check a few things or see the status of your cluster, use the included hacks.

    Check the Endpoint Status and DB Size

    If you want to see some key details for your cluster, you can run the etcdctl:

    $ etcdctl -w table endpoint status
    +----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    |       ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
    +----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    | https://1.2.3.3:2379 | e97ca8fed9268702 |  3.5.3  |  128 MB |   false   |   false    |     8     |   2766616  |       2766616      |        |
    | https://1.2.3.2:2379 | 82c75b78b63b558b |  3.5.3  |  127 MB |   true    |   false    |     8     |   2766616  |       2766616      |        |
    | https://1.2.3.1:2379 | afa5e0b54513b116 |  3.5.3  |  134 MB |   false   |   false    |     8     |   2766616  |       2766616      |        |
    +----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    

    Check the Revision and Count for All the Keys

    If you need to see how many keys you have, you can execute the following command; in this example, it reports 5061 keys.

     $ etcdctl get / --prefix --count-only=true --write-out=fields
    "ClusterID" : 1232296676125618033
    "MemberID" : 9423601319307597195
    "Revision" : 2712993
    "RaftTerm" : 8
    "More" : false
    "Count" : 5061
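    The same count can be broken down per prefix to see which resource types dominate; a sketch, where the prefixes are typical Kubernetes key spaces in etcd:

    ```shell
    # Count the keys under a few common Kubernetes prefixes.
    for PREFIX in /kubernetes.io/pods /kubernetes.io/events /kubernetes.io/secrets
    do
        COUNT=$(etcdctl get "${PREFIX}" --prefix --count-only=true --write-out=fields \
          | grep '"Count"' | awk '{print $NF}')
        echo "${COUNT} ${PREFIX}"
    done
    ```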

    Check the 5 Highest ModRevisions for a Key/Value

    If you need to find the most frequently updated keys (highest ModRevision), you can use this hack:

    $ for KEY in $(etcdctl get / --prefix --keys-only=true | grep -v leases)
    do 
        if [ ! -z "${KEY}" ]
        then 
            COUNT=$(etcdctl get ${KEY} --prefix --write-out=fields | grep \"ModRevision\" | awk '{print $NF}')
            echo "${COUNT} ${KEY}"
        fi
    done | sort -nr | head -n 5
    
    2732087 /kubernetes.io/validatingwebhookconfigurations/performance-addon-operator
    2731785 /kubernetes.io/resourcequotas/openshift-host-network/host-network-namespace-quotas
    2731753 /kubernetes.io/validatingwebhookconfigurations/multus.openshift.io
    2731549 /kubernetes.io/network.openshift.io/clusternetworks/default
    2731478 /kubernetes.io/configmaps/openshift-service-ca/service-ca-controller-lock

    Calculating the Theoretical Memory Pressure

    Per the etcd benchmark site, you can approximate memory pressure as:

    The theoretical memory consumption of watch can be approximated with the formula: memory = c1 * number_of_conn + c2 * avg_number_of_stream_per_conn + c3 * avg_number_of_watch_stream

    etcd benchmark site

    Command to be added in the future
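    Until that command is written up, the raw inputs to the formula can be pulled from etcd’s Prometheus metrics; a sketch, assuming a plain-HTTP metrics listener on port 2381 (OpenShift’s etcd serves metrics over TLS, so the port and flags will differ there), and noting that the exact watch metric names vary by etcd version:

    ```shell
    # Grab watcher- and stream-related metrics, which feed the
    # connection/stream/watch-stream terms of the formula above.
    curl -s http://localhost:2381/metrics | grep -E 'watch|stream' | head
    ```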


  • Analyzing Memory Hacks with Kubernetes and OpenShift and Linux

    I’ve had to run a number of queries for an OpenShift Cluster, Kubernetes and Linux recently, and here are my helpful queries:

    Node Memory

    If you want to check the Memory on each Node in your OpenShift Cluster, you can run the following oc command:

    $ oc get nodes -o json | jq -r '.items[] | "\(.metadata.name) - \(.status.capacity.memory)"'
    master-0.ocp-power.xyz - 16652928Ki
    master-1.ocp-power.xyz - 16652928Ki
    master-2.ocp-power.xyz - 16652928Ki
    worker-0.ocp-power.xyz - 16652928Ki
    worker-1.ocp-power.xyz - 16652928Ki

    Node Memory Pressure

    If you want to check the Memory usage on each Node in your OpenShift Cluster, you can run the following oc command:

    $ oc adm top node --show-capacity=true
    NAME                     CPU(cores)   MEMORY(bytes)   MEMORY%
    master-0.ocp-power.xyz   1894m        15272Mi         93%
    master-1.ocp-power.xyz   1037m        8926Mi          54%
    master-2.ocp-power.xyz   1563m        10953Mi         67%
    worker-0.ocp-power.xyz   1523m        6781Mi          41%
    worker-1.ocp-power.xyz   933m         6746Mi          41%

    Top Memory Usage per Pods

    If you want to check the Top Memory Usage per Pod, you can run the following command:

    $ oc adm top pod -A --sort-by='memory'
    NAMESPACE                  NAME                                    CPU(cores)   MEMORY(bytes)
    openshift-kube-apiserver   kube-apiserver-master-2.ocp-power.xyz   386m         2452Mi   (15%)
    openshift-kube-apiserver   kube-apiserver-master-0.ocp-power.xyz   225m         1924Mi   (11%)
    openshift-kube-apiserver   kube-apiserver-master-1.ocp-power.xyz   239m         1720Mi   (10%)

    List Container Memory Details per Pod

    If you want to see the breakdown of Memory usage, you can use the following kubectl command:

    $ kubectl top pod kube-apiserver-master-2.ocp-power.xyz -n openshift-kube-apiserver --containers
    POD                                     NAME                                          CPU(cores)   MEMORY(bytes)
    kube-apiserver-master-2.ocp-power.xyz   POD                                           0m           0Mi
    kube-apiserver-master-2.ocp-power.xyz   kube-apiserver                                514m         2232Mi
    kube-apiserver-master-2.ocp-power.xyz   kube-apiserver-cert-regeneration-controller   25m          51Mi
    kube-apiserver-master-2.ocp-power.xyz   kube-apiserver-cert-syncer                    0m           28Mi
    kube-apiserver-master-2.ocp-power.xyz   kube-apiserver-check-endpoints                7m           41Mi
    kube-apiserver-master-2.ocp-power.xyz   kube-apiserver-insecure-readyz                0m           16Mi

    Checking the High and Low Memory Limits on a Linux Host

    If you want to check the memory usage on a host in Gigabytes (including the max allocation), you can run the free command:

    $ free -g -h -l
                  total        used        free      shared  buff/cache   available
    Mem:           15Gi        10Gi       348Mi       165Mi       5.2Gi       4.8Gi
    Low:           15Gi        15Gi       348Mi
    High:            0B          0B          0B
    Swap:            0B          0B          0B

    Use Observe > Metrics

    If you have Metrics enabled, login to your OpenShift Dashboard, and click on Observe > Metrics and use one of the following

    sum(node_memory_MemAvailable_bytes) by (instance) / 1024 / 1024 / 1024
    :node_memory_MemAvailable_bytes:sum

    Check Memory Usage on the CoreOS Nodes

    If you want to check the memory details on each CoreOS node, you can use the following hack to SSH in and output the details.

    $ for HN in $(oc get nodes -o json | jq -r '.items[].status.addresses[] | select(.type=="Hostname").address')
    do
       echo HOSTNAME: $HN
       ssh core@$HN 'cat /proc/meminfo'
    done
    
    HOSTNAME: master-0.ocp-power.xyz
    MemTotal:       16652928 kB
    MemFree:          265472 kB
    MemAvailable:     933248 kB
    Buffers:             384 kB
    Cached:          1387584 kB
    SwapCached:            0 kB
    Active:           688000 kB
    Inactive:        7832192 kB
    Active(anon):     120448 kB
    Inactive(anon):  7307392 kB

    Check Top in Batch on the CoreOS Nodes

    If you want to check the Memory using Top (batch) on each CoreOS node, you can use the following hack to SSH in and output the details: (refer to link)

    for HN in $(oc get nodes -o json | jq -r '.items[].status.addresses[] | select(.type=="Hostname").address')
    do
       echo
       echo HOSTNAME: $HN
       ssh core@$HN 'top -b -d 5 -n 1 -E g -o +%MEM'
       sleep 10
    done
    
    HOSTNAME: master-0.ocp-power.xyz
    top - 23:41:40 up 7 days, 11:58,  0 users,  load average: 1.60, 2.24, 2.74
    Tasks: 390 total,   1 running, 389 sleeping,   0 stopped,   0 zombie
    %Cpu(s): 48.1 us, 10.6 sy,  0.0 ni, 39.4 id,  1.2 wa,  0.0 hi,  0.6 si,  0.0 st
    GiB Mem :     15.9 total,      0.2 free,     14.3 used,      1.3 buff/cache
    GiB Swap:      0.0 total,      0.0 free,      0.0 used.      0.8 avail Mem
    
        PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    1018800 root      20   0 2661824   1.7g  21696 S   0.0  10.5   2896:40 kube-ap+
      42247 root      20   0   11.3g   1.1g  14272 S  16.7   7.1   1483:55 etcd
       1704 root      20   0 3984384 220480  31680 S  11.1   1.3   3289:24 kubelet

    Pod Metrics (Thanks StackOverflow)

    If you want to get raw cpu and memory metrics from OpenShift, you can run the following:

    kubectl -n openshift-etcd get --raw /apis/metrics.k8s.io/v1beta1/namespaces/openshift-etcd/pods/etcd-master-0.ocp-power.xyz | jq
    {
      "kind": "PodMetrics",
      "apiVersion": "metrics.k8s.io/v1beta1",
      "metadata": {
        "name": "etcd-master-0.ocp-power.xyz",
        "namespace": "openshift-etcd",
        "creationTimestamp": "2022-06-25T00:10:58Z",
        "labels": {
          "app": "etcd",
          "etcd": "true",
          "k8s-app": "etcd",
          "revision": "7"
        }
      },
      "timestamp": "2022-06-25T00:10:58Z",
      "window": "5m0s",
      "containers": [
        {
          "name": "etcd",
          "usage": {
            "cpu": "142m",
            "memory": "1197632Ki"
          }
        },
        {
          "name": "etcd-health-monitor",
          "usage": {
            "cpu": "42m",
            "memory": "34240Ki"
          }
        },
        {
          "name": "etcd-metrics",
          "usage": {
            "cpu": "28m",
            "memory": "17920Ki"
          }
        },
        {
          "name": "etcd-readyz",
          "usage": {
            "cpu": "8m",
            "memory": "31680Ki"
          }
        },
        {
          "name": "etcdctl",
          "usage": {
            "cpu": "0",
            "memory": "3200Ki"
          }
        },
        {
          "name": "POD",
          "usage": {
            "cpu": "0",
            "memory": "0"
          }
        }
      ]
    }

    List Pods on a Node

    The following lists every Pod on a Node and outputs the namespace and pod name:

    $ oc get pods -A -o json --field-selector spec.nodeName=worker-0.ocp-power.xyz | jq -r '.items[] | "\(.metadata.namespace) / \(.metadata.name)"'
    openshift-cluster-node-tuning-operator / tuned-fdpl4
    openshift-dns / dns-default-6vdcw
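    A related hack counts Pods per node from the same JSON. The jq filter can be checked against a small fabricated pod list before pointing it at a live cluster:

    ```shell
    # Against a cluster you would feed this from:
    #   oc get pods -A -o json | jq -r '.items[].spec.nodeName' | sort | uniq -c
    # Here the same filter runs over a fabricated three-pod document;
    # it prints a count per node (2 for worker-0, 1 for worker-1).
    cat << 'EOF' | jq -r '.items[].spec.nodeName' | sort | uniq -c
    {"items":[{"spec":{"nodeName":"worker-0"}},{"spec":{"nodeName":"worker-0"}},{"spec":{"nodeName":"worker-1"}}]}
    EOF
    ```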