Blog

  • Downloading oc-compliance on ppc64le

My team is working with the OpenShift Container Platform's optional Compliance Operator. The Compliance Operator has a supporting tool, oc-compliance.

One tricky element was downloading the oc-compliance plugin, so I’ve documented the steps here to help.

    Steps

    1. Navigate to https://console.redhat.com/openshift/downloads#tool-pull-secret

If prompted, log in with your Red Hat Network ID.

2. Under Tokens, select Pull secret, then click Download

3. Copy the pull-secret to your working directory

4. Make the .local/bin directory to drop the plugin into.

    $ mkdir -p ~/.local/bin
    
5. Run the oc-compliance-rhel8 container image.
    $ podman run --authfile pull-secret --rm -v ~/.local/bin:/mnt/out:Z --arch ppc64le registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/
    Trying to pull registry.redhat.io/compliance/oc-compliance-rhel8:stable...
    Getting image source signatures
    Checking if image destination supports signatures
    Copying blob 847f634e7f1e done  
    Copying blob 7643f185b5d8 done  
    Copying blob d6050ae37df3 done  
    Copying config 2f0afdf522 done  
    Writing manifest to image destination
    Storing signatures
    
6. Check that the file is a ppc64le binary
    $ file ~/.local/bin/oc-compliance 
    /root/.local/bin/oc-compliance: ELF 64-bit LSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib64/ld64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=d5bff511ee48b6cbc6afce6420e780da2f0eacdc, not stripped
    

If it doesn’t work, you can always verify the architecture of the machine Podman is running on:

    $ arch
    ppc64le
    

    It should say ppc64le.

    You’ve seen how to download the ppc64le build.
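To use the plugin, make sure ~/.local/bin is on your PATH so oc can discover it. A minimal sketch, assuming bash:

$ export PATH="${HOME}/.local/bin:${PATH}"
$ oc compliance --help

oc discovers executables named oc-<name> on the PATH, so the binary is invoked as oc compliance.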


  • OpenShift on Power Blogs…

    Recently, I started a leadership position on a new squad focused on OpenShift on IBM Power Systems. Two of my teammates have posted blogs about their work:

    1. Configuring Seccomp Profile on OpenShift Container Platform for Security and Compliance on Power from Aditi covers the ins and outs of configuring the seccomp profile, and tells you why you should care and how you can configure it with your workload.
    2. Encrypting etcd data on Power from Gaurav covers encrypting the etcd data store on OpenShift and how to go through some common operations related to etcd management when it’s encrypted.
3. Encrypting OpenShift Container Platform Disks on Power Systems from Gaurav covers encryption concepts, how to set up an external Tang cluster on IBM PowerVS, how to set up a cluster on IBM PowerVS, and how to confirm the encrypted disk setup.
4. OpenShift TLS Security Profiles on IBM Power from Gaurav covers setting up TLS inside OpenShift and verifying the settings.
5. Lessons Learned using Security Context Constraints on OpenShift from Aditi covers key things she learned from using Security Context Constraints.
6. Securing NFS Attached Storage Notes from Aditi covers restricting the use of NFS mounts and securing the attached storage.
    7. Using the Compliance Operator to support PCI-DSS on OpenShift Container Platform on Power from Aditi dives into the PCI-DSS profile with the Compliance Operator.
    8. Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power from Gaurav dives into configuring a compliance cluster with recipes to enable proper configuration.

    I hope you found these as useful as I did. Best wishes, PB

  • Tweak for GoLang PowerPC Build

As many know, Go is designed to build architecture- and operating-system-specific binaries; such a combination is called a target. One can run GOARCH=ppc64le GOOS=linux go build to build for a specific target. There is a nice little tweak which considers the architecture’s version and optimizes the selection of the assembly (ASM) code used when building.

To target a specific Power processor generation with GOARCH=ppc64le, you can set GOPPC64:

1. power10 – runs on Power10 only.
2. power9 – runs on Power9 and Power10.
3. power8 (the default) – runs on Power8, Power9, and Power10.

For example, the command is GOARCH=ppc64le GOOS=linux GOPPC64=power9 go build.

This may help improve performance for some workloads on newer Power hardware.
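If you prefer not to repeat the variable on every build, a small sketch of setting it once through the Go environment (go env -w stores it in the user’s Go env file):

$ go env -w GOPPC64=power9
$ go env GOPPC64
power9
$ GOARCH=ppc64le GOOS=linux go build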


  • Using Go Memory and Processor Limits with Kubernetes DownwardAPI

As many know, Go is designed for performance with an emphasis on memory management and garbage collection. When used within cgroups on Kubernetes and Red Hat OpenShift, Go defaults to the memory available on the node and all of the node’s processors. As noted by Uber’s automaxprocs, on a shared system this can mean slightly degraded performance when the allocated CPUs are not limited to the CPUs actually available to the container (e.g., a prescribed limit).

Using environment variables, Go lets a user control memory and processor limits.

GOMEMLIMIT limits the Go heap and all other runtime memory (see runtime/debug.SetMemoryLimit).

    GOMAXPROCS limits the number of operating system threads that can execute user-level Go code simultaneously.
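Outside of Kubernetes, both can be set directly when launching a binary. A minimal sketch, where my-go-app is just a placeholder name:

$ GOMEMLIMIT=128MiB GOMAXPROCS=2 ./my-go-app

GOMEMLIMIT takes a byte count with an optional KiB/MiB/GiB/TiB suffix, and GOMAXPROCS takes an integer.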

There is an open source Go package, automaxprocs, that controls GOMAXPROCS automatically when used with cgroups.

In OpenShift/Kubernetes, there is a concept of spec.containers[].resources.limits for CPU and memory, as described in the article Resource Management for Pods and Containers.

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
      - name: my-container
        image: myimage
        resources:
          limits:
            memory: "128Mi"
            cpu: 2
    

To facilitate sharing these details with a container, Kubernetes provides the downwardAPI. The downwardAPI provides the details as an environment variable or a file.

    To see how this works in combination:

1. Create a YAML file test.yaml with resources.limits set and env.valueFrom.resourceFieldRef.resource mapped to the GOMEMLIMIT and GOMAXPROCS values you want.
    kind: Namespace
    apiVersion: v1
    metadata:
      name: demo
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-go-limits
      namespace: demo
    spec:
      containers:
        - name: test-container
          image: registry.access.redhat.com/ubi8/pause
          resources:
            limits:
              memory: 128Mi
              cpu: "2"
          command:
            - sh
            - '-c'
          args:
            - >-
              while true; do echo -en '\n'; printenv GOMEMLIMIT; printenv GOMAXPROCS;
              sleep 10; done;
          env:
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  containerName: test-container
                  resource: limits.memory
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  containerName: test-container
                  resource: limits.cpu
      restartPolicy: Never
    
2. Apply the file: oc apply -f test.yaml

3. Check the pod logs

    $ oc -n demo logs pod/dapi-go-limits
    134217728
    2
    
4. Delete the pod when you are done with the demonstration
    $ oc -n demo delete pod/dapi-go-limits
    pod "dapi-go-limits" deleted
    

This is a clear and easy way to control the Go runtime configuration from the container’s resource limits.

    Reference

    • https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
    • https://stackoverflow.com/questions/17853831/what-is-the-gomaxprocs-default-value
    • https://github.com/uber-go/automaxprocs#performance
    • https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
  • Linking Quay to OpenShift and you hit `x509: certificate signed by unknown authority`

If you see the following error when you link OpenShift to a self-signed Quay registry, I’ve got the steps for you…

    Events:
      Type     Reason          Age                From               Message
      ----     ------          ----               ----               -------
      Normal   Scheduled       38s                default-scheduler  Successfully assigned openshift-marketplace/my-operator-catalog-29vl8 to worker.output.xyz
      Normal   AddedInterface  36s                multus             Add eth0 [10.131.1.5/23] from openshift-sdn
      Normal   Pulling         23s (x2 over 36s)  kubelet            Pulling image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le"
      Warning  Failed          22s (x2 over 35s)  kubelet            Failed to pull image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le": rpc error: code = Unknown desc = pinging container registry quay-demo.host.xyz:8443: Get "https://quay-demo.host.xyz:8443/v2/": x509: certificate signed by unknown authority
      Warning  Failed          22s (x2 over 35s)  kubelet            Error: ErrImagePull
      Normal   BackOff         8s (x2 over 35s)   kubelet            Back-off pulling image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le"
      Warning  Failed          8s (x2 over 35s)   kubelet            Error: ImagePullBackOff
    

    Steps

1. Set variables for your registry hostname and port
    export REGISTRY_HOSTNAME=quay-demo.host.xyz
    export REGISTRY_PORT=8443
    
2. Extract all the CA certs
    echo "" | openssl s_client -showcerts -prexit -connect "${REGISTRY_HOSTNAME}:${REGISTRY_PORT}" 2> /dev/null | sed -n -e '/BEGIN CERTIFICATE/,/END CERTIFICATE/ p' > tmp.crt
    
3. Display the cert to verify you see the Issuer
    # openssl x509 -in tmp.crt -text | grep Issuer
            Issuer: C = US, ST = VA, L = New York, O = Quay, OU = Division, CN = quay-demo.host.xyz
    
4. Create the ConfigMap in the openshift-config namespace
    # oc create configmap registry-quay -n openshift-config --from-file="${REGISTRY_HOSTNAME}..${REGISTRY_PORT}=$(pwd)/tmp.crt"
    configmap/registry-quay created
    
5. Add an additionalTrustedCA to the cluster image config.
    # oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-quay"}}}' --type=merge
    image.config.openshift.io/cluster patched
    
6. Verify your config is updated
    # oc get image.config.openshift.io/cluster -o yaml
    apiVersion: config.openshift.io/v1
    kind: Image
    metadata:
      annotations:
        include.release.openshift.io/ibm-cloud-managed: "true"
        include.release.openshift.io/self-managed-high-availability: "true"
        include.release.openshift.io/single-node-developer: "true"
        release.openshift.io/create-only: "true"
      creationTimestamp: "2022-10-20T15:35:08Z"
      generation: 2
      name: cluster
      ownerReferences:
      - apiVersion: config.openshift.io/v1
        kind: ClusterVersion
        name: version
        uid: a3df97ca-73ff-4a72-93b1-f3ef7d51e329
      resourceVersion: "6299552"
      uid: f7e56517-486d-4530-8e14-16ef0deed462
    spec:
      additionalTrustedCA:
        name: registry-quay
    status:
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
    
7. Check the pod that failed to pull; it should now succeed.
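If the pod does not retry on its own, one option (a sketch using the catalog pod name from the events above) is to delete it so its owner recreates it and the image pull is attempted again:

# oc -n openshift-marketplace delete pod/my-operator-catalog-29vl8
# oc -n openshift-marketplace get pods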


  • Use Qemu to Build S390x images

Tips for building s390x images using QEMU emulation

    1. Connect to a build machine

    ssh root@ip

2. Clone the operator

git clone https://github.com/prb112/operator.git

3. Install qemu, buildah, and podman-docker

    yum install -y qemu-kvm buildah podman-docker

    /usr/bin/docker run --rm --privileged tonistiigi/binfmt:latest --install all
    Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
    ✔ docker.io/tonistiigi/binfmt:latest
    Trying to pull docker.io/tonistiigi/binfmt:latest...
    Getting image source signatures
    Copying blob e9c608ddc3cb done  
    Copying blob 8d4d64c318a5 done  
    Copying config 354472a378 done  
    Writing manifest to image destination
    Storing signatures
    installing: arm64 OK
    installing: arm OK
    installing: ppc64le OK
    installing: mips64 OK
    installing: riscv64 OK
    installing: mips64le OK
    installing: s390x OK
    {
      "supported": [
        "linux/amd64",
        "linux/arm64",
        "linux/riscv64",
        "linux/ppc64le",
        "linux/s390x",
        "linux/386",
        "linux/mips64le",
        "linux/mips64",
        "linux/arm/v7",
        "linux/arm/v6"
      ],
      "emulators": [
        "kshcomp",
        "qemu-aarch64",
        "qemu-arm",
        "qemu-mips64",
        "qemu-mips64el",
        "qemu-ppc64le",
        "qemu-riscv64",
        "qemu-s390x"
      ]
    }
    

4. Build the s390x image

/usr/bin/buildah bud --arch s390x -f $(pwd)/build/Dockerfile --format docker --tls-verify=true -t op:v0.1.1-linux-s390x $(pwd)/
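To double-check the result really is an s390x image, a quick sketch using podman against the tag built above:

/usr/bin/podman image inspect --format '{{.Architecture}}' localhost/op:v0.1.1-linux-s390x
s390x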

  • Setting up nfs-provisioner on OpenShift on Power Systems

Here are my notes for setting up the SIG’s NFS provisioner. You should follow these directions to set up kubernetes-sigs/nfs-subdir-external-provisioner.

    1. Clone the nfs-subdir-external-provisioner
    git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
    
2. If you haven’t already, create the nfs-provisioner namespace.

    a. Create the ns.yaml

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        kubernetes.io/metadata.name: nfs-provisioner
        pod-security.kubernetes.io/enforce: privileged
        pod-security.kubernetes.io/enforce-version: v1.24
      name: nfs-provisioner
    

b. Create the namespace

    oc apply -f ns.yaml
    

c. Label the namespace

    oc label namespace/nfs-provisioner security.openshift.io/scc.podSecurityLabelSync=false --overwrite=true
    oc label namespace/nfs-provisioner pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/nfs-provisioner pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/nfs-provisioner pod-security.kubernetes.io/warn=privileged --overwrite=true
    
3. Change to the deploy/ directory
    cd nfs-subdir-external-provisioner/deploy
    
4. In deployment.yaml, change the namespace from default to nfs-provisioner.

5. On the bastion server, look at ocp4-helpernode/helpernode_vars.yaml for the helper.ipaddr value.

    helper:
      networkifacename: env3
      name: "bastion-0"
      ipaddr: "193.168.200.15"
    
6. Update the deployment with NFS_SERVER set to the helper.ipaddr value and NFS_PATH set to /export. It should look like the following:
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: k8s-sigs.io/nfs-subdir-external-provisioner
                - name: NFS_SERVER
                  value: 193.168.200.15
                - name: NFS_PATH
                  value: /export
          volumes:
            - name: nfs-client-root
              nfs:
                server: 193.168.200.15
                path: /export
    

    v4.0.2 supports ppc64le.

Be sure to remove the namespace: default entry.

7. Create the deployment
    oc apply -f deployment.yaml
    deployment.apps/nfs-client-provisioner created
    
8. Get the pods
    oc get pods
    NAME                                     READY   STATUS    RESTARTS   AGE
    nfs-client-provisioner-b8764c6bb-mjnq9   1/1     Running   0          36s
    
9. Set up authorization
    NAMESPACE=`oc project -q`
    sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./rbac.yaml 
    oc create -f rbac.yaml
    oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
    
10. Create the storage class file, sc.yml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-client
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
    parameters:
      pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
      onDelete: delete
    
11. Apply the StorageClass
    oc apply -f sc.yml
    
12. Then you can deploy the PV and PVC files/6_EvictPodsWithPVC_dp.yml (a minimal test claim is sketched below).
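A minimal test claim, assuming the nfs-client StorageClass above (the claim name, annotation value, and size are only illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    nfs.io/storage-path: "test-path"
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Apply it with oc apply -f test-claim.yaml and confirm it reaches the Bound state with oc get pvc test-claim.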


  • openshift-install-power – quick notes

FYI: openshift-install-power – this is a small recipe for deploying the latest code with the UPI automation from the master branch of my repo:

git clone https://github.com/ocp-power-automation/openshift-install-power.git
cd openshift-install-power
chmod +x openshift-install-powervs
    export IBMCLOUD_API_KEY="<<redacted>>"
    export RELEASE_VER=latest
    export ARTIFACTS_VERSION="master"
    export ARTIFACTS_REPO="<<MY REPO>>"
    ./openshift-install-powervs setup
    ./openshift-install-powervs create -var-file mon01-20220930.tfvars -flavor small -trace
    

This also recovers from errors in ocp4-upi-powervs/terraform.
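When you are done with the cluster, the same wrapper can tear it down again (assuming the destroy subcommand in the current script):

./openshift-install-powervs destroy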

  • Topology Manager and OpenShift/Kubernetes

    I recently had to work with the Kubernetes Topology Manager and OpenShift. Here is a braindump on Topology Manager:

If the Topology Manager Feature Gate is enabled, then any active HintProviders are registered to the TopologyManager.

If the CPU Manager and its feature gate are enabled, the CPU Manager can be used to help workloads which are sensitive to CPU throttling, context switches, and cache misses, which require hyperthreads on the same physical CPU core or low latency, or which benefit from shared processor resources. The manager has two policies, none and static, which register a NOP provider or statically lock the container to a set of CPUs.

If the Memory Manager and its feature gate are enabled, the MemoryManager can be used independently of the CPU Manager – e.g., to allocate HugePages or guaranteed memory.

If Device Plugins are enabled, devices can be allocated alongside NUMA node resources (e.g., SR-IOV NICs). This may be used independently of the typical CPU/memory management, for GPUs and other machine devices.

    Generally, these are all used together to generate a BitMask that admits a pod using a best-effort, restricted, or single-numa-node policy.

An important limitation is that the maximum number of NUMA nodes is hard-coded to 8. When there are more than eight NUMA nodes, topology assignment errors out. The reason for this is state explosion and computational complexity.

1. Check the worker node’s CPUs: if NUMA node(s) returns 1, it’s a single NUMA node; if it returns 2 or more, there are multiple NUMA nodes.
    sh-4.4# lscpu | grep 'NUMA node(s)'
    NUMA node(s):        1
    

    The kubernetes/enhancements repo contains great detail on the flows and weaknesses of the TopologyManager.

    To enable the Topology Manager, one uses Feature Gates:

OpenShift prefers the FeatureSet LatencySensitive.

    1. Via FeatureGate
    $ oc patch featuregate cluster -p '{"spec": {"featureSet": "LatencySensitive"}}' --type merge
    

This turns on the basic TopologyManager feature gate in /etc/kubernetes/kubelet.conf:

      "featureGates": {
        "APIPriorityAndFairness": true,
        "CSIMigrationAzureFile": false,
        "CSIMigrationvSphere": false,
        "DownwardAPIHugePages": true,
        "RotateKubeletServerCertificate": true,
        "TopologyManager": true
      },
    
2. Create a custom KubeletConfig; this allows targeted TopologyManager feature enablement.

    file: cpumanager-kubeletconfig.yaml

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: cpumanager-enabled
    spec:
      machineConfigPoolSelector:
        matchLabels:
          custom-kubelet: cpumanager-enabled
      kubeletConfig:
         cpuManagerPolicy: static 
         cpuManagerReconcilePeriod: 5s 
    
    $ oc create -f cpumanager-kubeletconfig.yaml
    

Net: they can be used independently of each other, but they should be turned on at the same time to maximize the benefits.
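For the policies to have anything to align, the pod generally needs Guaranteed QoS with integer CPU requests (requests equal to limits). A minimal sketch; the pod name and image are only illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: numa-demo
spec:
  containers:
  - name: numa-demo
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: "2"
        memory: "512Mi"
      limits:
        cpu: "2"
        memory: "512Mi"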

    There are some examples and test cases out there for Kubernetes and OpenShift

1. Red Hat Sys Engineering Team test cases for the Performance Addon Operator, which is now the Cluster Node Tuning Operator – these are the clearest tests, and they apply directly to the Topology Manager.
    2. Kube Test Cases

One of the best examples is k8stopologyawareschedwg/sample-device-plugin.

    Tools to know about

1. GitHub: numalign (amd64) – you can download this from the releases. In the fork prb112/numalign, I added ppc64le to the build.
2. numactl and numastat are superbly helpful to see the topology spread on a node (link to a handy PDF on NUMA). I’ve been starting up a Fedora container with numactl and numastat installed.

Final note: I had written down that Fedora is a great combination with taskset and numactl if you copy in the binaries. I think I used Fedora 35/36 as a container. link
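A rough sketch of that container workflow (the package names are assumed from the Fedora repositories):

$ podman run -it --rm fedora:36 bash
# dnf install -y numactl
# numactl --hardware
# numastat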

Yes, I built a HugePages-hungry container, Hugepages. I also looked at hugepages_tests.go and the test plan.

When it came down to it, I used my HugePages-hungry container with the example.

    I hope this helps others as they start to work with Topology Manager.

    References

    Red Hat

    1. Red Hat Topology Aware Scheduling in Kubernetes Part 1: The High Level Business Case
    2. Red Hat Topology Awareness in Kubernetes Part 2: Don’t we already have a Topology Manager?

    OpenShift

    1. OpenShift 4.11: Using the Topology Manager
    2. OpenShift 4.11: Using device plug-ins to access external resources with pods
3. OpenShift 4.11: Using Device Manager to make devices available to nodes
    4. OpenShift 4.11: About Single Root I/O Virtualization (SR-IOV) hardware networks – Device Manager
    5. OpenShift 4.11: Adding a pod to an SR-IOV additional network
6. OpenShift 4.11: Using CPU Manager

    Kubernetes

    1. Kubernetes: Topology Manager Blog
    2. Feature Highlight: CPU Manager
3. Feature: Utilizing the NUMA-aware Memory Manager

    Kubernetes Enhancement

    1. KEP-693: Node Topology Manager e2e tests: Link
    2. KEP-2625: CPU Manager e2e tests: Link
    3. KEP-1769: Memory Manager Source: Link PR: Link
  • Kube 1.25.2 on RHEL9 P10

1. Update the hosts file with the cluster nodes
    9.0.90.0 ocp4daily70.ibm.com
    9.0.90.1 ocp4daily98.ibm.com
    
2. Set up the Subscription Manager
    set +o history
    export rhel_subscription_username="rhn-ee-xxxxx"
    export rhel_subscription_password="xxxxx"
    set -o history
    subscription-manager register --username="${rhel_subscription_username}" --password="${rhel_subscription_password}"
    subscription-manager refresh
    
3. Disable swap
    sudo swapoff -a
    
4. Install the prerequisite packages
    yum install -y podman podman-remote socat runc
    
5. Install the cri-o package
    rpm -ivh https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.25:/1.25.0/Fedora_36/ppc64le/cri-o-1.25.0-2.1.fc36.ppc64le.rpm
    
6. Enable the podman socket
    systemctl enable --now podman.socket
    
7. Enable and start the crio service
    sudo systemctl enable crio
    sudo systemctl start crio
    
8. Disable SELinux enforcement (set to permissive)
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
9. Download the release binaries
    export RELEASE=1.25
    sudo curl -L --remote-name-all https://dl.k8s.io/v1.25.2/bin/linux/ppc64le/{kubeadm,kubelet,kubectl}
    sudo chmod +x {kubeadm,kubelet,kubectl}
    
10. Move the files to /bin
    mv kube* /bin/
    
11. Add kubelet.service
RELEASE_VERSION="v0.14.0"
DOWNLOAD_DIR="/bin"
    curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
12. Enable and start the kubelet service
    systemctl enable --now kubelet
    systemctl start kubelet
    
13. Configure the kernel modules to load at boot
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    
14. Load the modules
    sudo modprobe overlay
    sudo modprobe br_netfilter
    
15. Set the sysctl params required by setup (these persist across reboots)
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    
16. Apply the sysctl params without reboot
    sudo sysctl --system
    
17. Install libnetfilter and conntrack-tools
    rpm -ivh http://mirror.stream.centos.org/9-stream/AppStream/ppc64le/os/Packages/libnetfilter_queue-1.0.5-1.el9.ppc64le.rpm
    rpm -ivh http://mirror.stream.centos.org/9-stream/AppStream/ppc64le/os/Packages/libnetfilter_cttimeout-1.0.0-19.el9.ppc64le.rpm
    rpm -ivh http://mirror.stream.centos.org/9-stream/AppStream/ppc64le/os/Packages/libnetfilter_cthelper-1.0.0-22.el9.ppc64le.rpm
    rpm -ivh http://mirror.stream.centos.org/9-stream/AppStream/ppc64le/os/Packages/conntrack-tools-1.4.5-15.el9.ppc64le.rpm
    
18. Copy the kubelet binary
    cp /bin/kubelet /kubelet
    
19. Edit crio.conf
    /etc/crio/crio.conf
    
    conmon_cgroup = "pod"
    cgroup_manager = "systemd"
    
20. Add the CNI plugins
    curl -O https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-ppc64le-v1.1.1.tgz -L
    cp cni-plugins-linux-ppc64le-v1.1.1.tgz /opt/cni/bin
    cd /opt/cni/bin
    tar xvfz cni-plugins-linux-ppc64le-v1.1.1.tgz 
    chmod +x /opt/cni/bin/*
    cd ~
    systemctl restart crio kubelet
    
21. Download crictl
    curl -L --remote-name-all https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-ppc64le.tar.gz
    tar xvfz crictl-v1.25.0-linux-ppc64le.tar.gz
    chmod +x crictl
    mv crictl /bin
    
22. Initialize the cluster with kubeadm
    kubeadm init --cri-socket=unix:///var/run/crio/crio.sock --pod-network-cidr=192.168.0.0/16
    
23. Set up the kubeconfig
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
24. Manually copy the .kube/config over to the worker node and do a kubeadm reset

25. Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

26. Edit the containers to point to the right ppc64le images, per the notes in the YAML

27. Update net-conf.json

      net-conf.json: |
        {
          "Network": "192.168.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    
28. Join the cluster from the worker node
    kubeadm join 9.0.90.1:6443 --token xbp7gy.9eem3bta75v0ccw8 \
            --discovery-token-ca-cert-hash sha256:a822342f231db2e730559b4962325a2c2c685d7fc440ae41987e123da47f9118
    
29. Add the worker role to the worker nodes
    kubectl label node ocp4daily70.ibm.com node-role.kubernetes.io/worker=worker
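
A quick sanity check from the control-plane node once the worker joins (output will reflect your own node names):

kubectl get nodes -o wide
kubectl get pods -A -o wide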