Category: Application Development

  • PowerVS: Grabbing a VM Instance Console

    1. Create the API Key at https://cloud.ibm.com/iam/apikeys

    2. On the terminal, export the IBMCLOUD_API_KEY.

    $ export IBMCLOUD_API_KEY=...REDACTED...
    
    3. Log in to IBM Cloud using the command-line tool https://www.ibm.com/cloud/cli
    $ ibmcloud login --apikey "${IBMCLOUD_API_KEY}" -r ca-tor
    API endpoint: https://cloud.ibm.com
    Authenticating...
    OK
    
    Targeted account Demo <-> 1012
    
    Targeted region ca-tor
    
    Users of 'ibmcloud login --vpc-cri' need to use this API to login until July 6, 2022: https://cloud.ibm.com/apidocs/vpc-metadata#create-iam-token
                          
    API endpoint:      https://cloud.ibm.com   
    Region:            ca-tor   
    User:              myuser@us.ibm.com   
    Account:           Demo <-> 1012   
    Resource group:    No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'   
    CF API endpoint:      
    Org:                  
    Space:  
    
    4. List your PowerVS services
    $ ibmcloud pi sl
    Listing services under account Demo as user myuser@us.ibm.com...
    ID                                                                                                                   Name   
    crn:v1:bluemix:public:power-iaas:mon01:a/999999c1f1c29460e8c2e4bb8888888:ADE123-8232-4a75-a9d4-0e1248fa30c6::     demo-service   
    
    5. Target your PowerVS service instance
    $ ibmcloud pi st crn:v1:bluemix:public:power-iaas:mon01:a/999999c1f1c29460e8c2e4bb8888888:ADE123-8232-4a75-a9d4-0e1248fa30c6::    
    
    6. List the PowerVS service's VMs
    $ ibmcloud pi ins                                                  
    Listing instances under account Demo as user myuser@us.ibm.com...
    ID                                     Name                                   Path   
    12345-ae8f-494b-89f3-5678   control-plane-x       /pcloud/v1/cloud-instances/abc-def-ghi-jkl/pvm-instances/12345-ae8f-494b-89f3-5678   
    
    7. Create a Console for the VM instance you want to look at:
    $ ibmcloud pi ingc control-plane-x
    Getting console for instance control-plane-x under account Demo as user myuser@us.ibm.com...
                     
    Name          control-plane-x   
    Console URL   https://mon01-console.power-iaas.cloud.ibm.com/console/index.html?path=%3Ftoken%3not-real  
    
    8. Click on the Console URL to view the console in your browser; it can be very helpful.

    I was able to diagnose that I had the wrong reference image.
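    If you grab consoles often, the targeting step can be scripted by extracting the service CRN from the ibmcloud pi sl output. A minimal sketch, assuming the two-column (ID, Name) layout shown above; the sample row and service name are the demo ones from this walkthrough:

```shell
#!/usr/bin/env bash
# Hypothetical sample of `ibmcloud pi sl` output: a header row plus one service row.
output='ID                                                                                                                   Name
crn:v1:bluemix:public:power-iaas:mon01:a/999999c1f1c29460e8c2e4bb8888888:ADE123-8232-4a75-a9d4-0e1248fa30c6::     demo-service'

# Pick the CRN (first column) of the service named demo-service.
crn=$(printf '%s\n' "$output" | awk '$2 == "demo-service" { print $1 }')
echo "$crn"
# In a live session you would then run: ibmcloud pi st "$crn"
```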

  • What to do when you see “Application is not available” on the OpenShift Console

    This post helps those who are stuck with “Application is not available” on the OpenShift Console on IBM Virtual Private Cloud (VPC).

    First, when you access the OpenShift Console at https://console-openshift-console.hidden.eu-gb.containers.appdomain.cloud/dashboards you'll see:

    Application is not available

    Steps

    1. Find your worker nodes
    $ oc get nodes -l node-role.kubernetes.io/worker
    NAME         STATUS   ROLES           AGE   VERSION
    worker0   Ready    master,worker   28h   v1.23.5+3afdacb
    worker1   Ready    master,worker   28h   v1.23.5+3afdacb

    2. Launch a debug pod on node/worker0, chroot to the host, and curl to confirm the request times out.

    $ oc debug node/worker0                                                               
    Starting pod/1024204-debug ...
    To use host binaries, run `chroot /host`
    Pod IP: 10.242.0.4
    If you don't see a command prompt, try pressing enter.
    sh-4.4# chroot /host
    curl google.com -v -k
    * About to connect() to google.com port 80 (#0)
    *   Trying 216.58.212.238...

    If the curl command never completes, then you probably don’t have the VPC set for egress.
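    To script this check so it fails fast instead of hanging the debug session, curl's --max-time flag helps. A small sketch; the helper name, probe URL, and timeout are illustrative:

```shell
#!/usr/bin/env bash
# Report whether outbound traffic works, without hanging forever.
check_egress() {
  local url="$1"
  # --max-time caps the whole transfer; curl exits non-zero on timeout.
  if curl --silent --show-error --max-time 5 --output /dev/null "$url" 2>/dev/null; then
    echo "EGRESS OK"
  else
    echo "NO EGRESS"
  fi
}

# Example: an unroutable RFC 1918 address should time out.
check_egress http://10.255.255.1
```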

    3. Navigate to https://cloud.ibm.com/vpc-ext/network/subnet/

    4. Find your subnet and click Public Gateway to attach one

    5. Retry accessing your Console (you can also retry from the command line with oc debug). You should now see the dashboard (note the console pod may need to recover from a CrashLoopBackOff, so it may take a few minutes).

    Appendix: Checking your Console URL

    If you don’t know your external console URL, you can retrieve it from oc.

    $ oc -n openshift-config-managed get cm console-public -o jsonpath='{.data.consoleURL}'
    https://console-openshift-console.hidden.eu-gb.containers.appdomain.cloud

    Appendix: Checking Access Tokens

    If you are using OAuthAccessTokens in your environment and you closed your display, you can always get a view (as kubeadmin) of the current access tokens using the OpenShift command line.

    $ oc get oauthaccesstokens -A              
    NAME                                                 USER NAME                        CLIENT NAME                CREATED   EXPIRES                         REDIRECT URI                                                              SCOPES
    sha256~-m   IAM#yyy@ibm.com    openshift-browser-client   12m       2022-06-22 15:00:38 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full
    sha256~x   IAM#g@ibm.com    openshift-browser-client   10m       2022-06-22 15:02:24 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full
    sha256~z   IAM#x@us.ibm.com          openshift-browser-client   171m      2022-06-22 12:21:30 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full
    sha256~z   IAM#y@ibm.com   openshift-browser-client   131m      2022-06-22 13:01:18 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full
    sha256~y   IAM#y@ibm.com   openshift-browser-client   84m       2022-06-22 13:48:29 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full
    sha256~x   IAM#y@ibm.com   openshift-browser-client   130m      2022-06-22 13:02:25 +0000 UTC   https://hiddene.eu-gb.containers.cloud.ibm.com:31871/oauth/token/display   user:full

    Appendix: Checking the OAuth Well Known

    To check the well-known OAuth endpoints, visit https://hidden-e.eu-gb.containers.cloud.ibm.com:30603/.well-known/oauth-authorization-server
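    The response is JSON (shaped per RFC 8414). If jq isn't handy, the issuer can be pulled out with sed; a sketch against a hypothetical response body (the hostnames are made up):

```shell
#!/usr/bin/env bash
# Hypothetical well-known response; a real one would come from something like:
#   curl -k https://<cluster>:30603/.well-known/oauth-authorization-server
json='{"issuer":"https://oauth.example:30603","authorization_endpoint":"https://oauth.example:30603/oauth/authorize","token_endpoint":"https://oauth.example:30603/oauth/token"}'

# Extract the issuer field without jq.
issuer=$(printf '%s' "$json" | sed -n 's/.*"issuer":"\([^"]*\)".*/\1/p')
echo "$issuer"
```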

  • OpenShift Descheduler Operator: How-To

    In OpenShift, the kube-scheduler binds a unit of work (Pod) to a Node. The scheduler reads from a scheduling queue the work, retrieves the current state of the cluster, scores the work based on the scheduling rules (from the policy) and the cluster’s state, and prioritizes binding the Pod to a Node.

    Pods are scheduled based on an instantaneous read of the policy and the environment: a best-estimate placement of the Pod on a Node at that moment. Since clusters constantly change shape and context, there is a need to deschedule and schedule the Pod anew.

    There are four actors in the Descheduler:

    1. User configures the KubeDescheduler resource
    2. Operator creates the Descheduler Deployment
    3. Descheduler runs on a set interval and re-evaluates the scheduled Pod, Node, and Policy, setting an eviction if the Pod should be removed based on the Descheduler Policy.
    4. Pod is removed (unbound).

    Thankfully, OpenShift has a Descheduler Operator that facilitates the unbinding of a Pod from a Node based on a cluster-wide configuration of the KubeDescheduler CustomResource. In a single cluster, there is at most one KubeDescheduler, named cluster (the name is fixed), and it configures one or more Descheduler Profiles.

    Descheduler Profiles are predefined and available in the profiles folder – DeschedulerProfile:

    1. AffinityAndTaints – Balances pods based on node taint violations.
    2. TopologyAndDuplicates – Spreads pods evenly among nodes based on topology constraints and duplicate replicas on the same node. This profile cannot be used with SoftTopologyAndDuplicates.
    3. SoftTopologyAndDuplicates – Spreads pods as above, but also honors soft topology constraints. This profile cannot be used with TopologyAndDuplicates.
    4. LifecycleAndUtilization – Balances pods based on node resource usage. This profile cannot be used with DevPreviewLongLifecycle.
    5. EvictPodsWithLocalStorage – Enables pods with local storage to be evicted by all other profiles.
    6. EvictPodsWithPVC – Prevents pods with PVCs from being evicted by all other profiles.
    7. DevPreviewLongLifecycle – Lifecycle management for pods that are 'long running'. This profile cannot be used with LifecycleAndUtilization.

    There must be one or more DeschedulerProfiles specified, and there cannot be any duplicate entries. There are two possible mode values – Automatic and Predictive. In Predictive mode, you have to check the descheduler Pod's log output to see what is predicted; in Automatic mode, the evictions are carried out.

    The DeschedulerOperator excludes the openshift-*, kube-system and hypershift namespaces.

    Steps

    1. Log in to your OpenShift Cluster
    oc login --token=sha256~1111-g --server=https://api..sslip.io:6443

    2. Create a Pod that indicates it's available for eviction using the annotation descheduler.alpha.kubernetes.io/evict: "true".

    cat << EOF > pod.yaml 
    kind: Pod
    apiVersion: v1
    metadata:
      annotations:
        descheduler.alpha.kubernetes.io/evict: "true"
      name: demopod1
      labels:
        foo: bar
    spec:
      containers:
      - name: pause
        image: docker.io/ibmcom/pause-ppc64le:3.1
    EOF
    oc apply -f pod.yaml 
    pod/demopod1 created
    3. Create the KubeDescheduler CR with a Descheduling Interval of 60 seconds and Pod Lifetime of 1m.
    cat << EOF > kd.yaml 
    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      logLevel: Normal
      mode: Predictive
      operatorLogLevel: Normal
      deschedulingIntervalSeconds: 60
      profileCustomizations:
        podLifetime: 1m0s
      observedConfig:
        servingInfo:
          cipherSuites:
            - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
            - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
            - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
            - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
            - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
          minTLSVersion: VersionTLS12
      profiles:
        - LifecycleAndUtilization
      managementState: Managed
    EOF
    oc apply -f kd.yaml
    4. Get the Pods in the openshift-kube-descheduler-operator namespace
    oc get pods -n openshift-kube-descheduler-operator                              
    NAME                                    READY   STATUS    RESTARTS   AGE
    descheduler-f479c5669-5ffxl             1/1     Running   0          2m7s
    descheduler-operator-85fc6666cb-5dfr7   1/1     Running   0          27h
    5. Check the logs for the descheduler pod
    oc -n openshift-kube-descheduler-operator logs descheduler-f479c5669-5ffxl
    I0506 19:59:10.298440       1 pod_lifetime.go:110] "Evicted pod because it exceeded its lifetime" pod="minio-operator/console-7bc65f7dd9-q57lr" maxPodLifeTime=60
    I0506 19:59:10.298500       1 evictions.go:158] "Evicted pod in dry run mode" pod="default/demopod1" reason="PodLifeTime"
    I0506 19:59:10.298532       1 pod_lifetime.go:110] "Evicted pod because it exceeded its lifetime" pod="default/demopod1" maxPodLifeTime=60
    I0506 19:59:10.298598       1 toomanyrestarts.go:90] "Processing node" node="master-0.rdr-rhop-.sslip.io"
    I0506 19:59:10.299118       1 toomanyrestarts.go:90] "Processing node" node="master-1.rdr-rhop.sslip.io"
    I0506 19:59:10.299575       1 toomanyrestarts.go:90] "Processing node" node="master-2.rdr-rhop.sslip.io"
    I0506 19:59:10.300385       1 toomanyrestarts.go:90] "Processing node" node="worker-0.rdr-rhop.sslip.io"
    I0506 19:59:10.300701       1 toomanyrestarts.go:90] "Processing node" node="worker-1.rdr-rhop.sslip.io"
    I0506 19:59:10.301097       1 descheduler.go:287] "Number of evicted pods" totalEvicted=5

    This article shows a simple case for the Descheduler: you can see how it ran a dry run and showed it would evict five pods.
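    To tally what a Predictive (dry-run) pass would evict, you can grep the descheduler log. A sketch against sample lines shaped like the output above; in a live cluster you would pipe oc logs instead of the sample variable:

```shell
#!/usr/bin/env bash
# Sample log lines shaped like the descheduler output above.
log='I0506 19:59:10.298500  1 evictions.go:158] "Evicted pod in dry run mode" pod="default/demopod1" reason="PodLifeTime"
I0506 19:59:10.298532  1 pod_lifetime.go:110] "Evicted pod because it exceeded its lifetime" pod="default/demopod1" maxPodLifeTime=60
I0506 19:59:10.301097  1 descheduler.go:287] "Number of evicted pods" totalEvicted=5'

# Count the dry-run evictions; live, this would be:
#   oc -n openshift-kube-descheduler-operator logs <descheduler-pod> | grep -c "dry run mode"
printf '%s\n' "$log" | grep -c "dry run mode"
```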

  • Operator Training – Part 1: Concepts and Why Use Go

    A brief Operator training I gave to my team resulted in these notes. Thanks to many others in the reference section.

    An Operator codifies the tasks commonly associated with administrating, operating, and supporting an application.  The codified tasks are event-driven responses to changes (create-update-delete-time) in the declared state relative to the actual state of an application, using domain knowledge to reconcile the state and report on the status.

    Figure 1 Operator Pattern

    Operators are used to execute basic and advanced operations:

    Basic (Helm, Go, Ansible)

    1. Installation and Configuration
    2. Uninstall and Destroy
    3. Seamless Upgrades

    Advanced (Go, Ansible)

    1. Application Lifecycle (Backup, Failure Recovery)
    2. Monitoring, Metrics, Alerts, Log Processing, Workload Analysis
    3. Auto-scaling: Horizontal and Vertical
    4. Event (Anomaly) Detection and Response (Remediation)
    5. Scheduling and Tuning
    6. Application Specific Management
    7. Continuous Testing and Chaos Monkey

    Helm operators wrap Helm charts in a simplistic view of the operation, passing through the Helm verbs so one can install, uninstall, destroy, and upgrade using an Operator.

    There are four actors in the Operator Pattern.

    1. Initiator – The user who creates the Custom Resource
    2. Operator – The Controller that operates on the Operand
    3. Operand – The target application
    4. OpenShift and Kubernetes Environment
    Figure 2 Common Terms

    Each Operator operates on an Operand using Managed Resources (Kubernetes and OpenShift) to reconcile states.  The states are described in a domain specific language (DSL) encapsulated in a Custom Resource to describe the state of the application:

    1. spec – The User communicates to the Operator the desired state (Operator reads)
    2. status – The Operator communicates back to the User (Operator writes)
    $ oc get authentications cluster -o yaml
    apiVersion: config.openshift.io/v1
    kind: Authentication
    metadata:
      annotations:
        include.release.openshift.io/ibm-cloud-managed: "true"
        include.release.openshift.io/self-managed-high-availability: "true"
        include.release.openshift.io/single-node-developer: "true"
        release.openshift.io/create-only: "true"
    spec:
      oauthMetadata:
        name: ""
      serviceAccountIssuer: ""
      type: ""
      webhookTokenAuthenticator:
        kubeConfig:
          name: webhook-authentication-integrated-oauth
    status:
      integratedOAuthMetadata:
        name: oauth-openshift

    While Operators are not limited to writing spec and status, if we treat spec as initiator-specified and status as operator-written, then we limit the chances of creating an unintended reconciliation loop.

    The DSL is specified as Custom Resource Definition:

    $ oc get crd machinehealthchecks.machine.openshift.io -o=yaml
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    spec:
      conversion:
        strategy: None
      group: machine.openshift.io
      names:
        kind: MachineHealthCheck
        listKind: MachineHealthCheckList
        plural: machinehealthchecks
        shortNames:
        - mhc
        - mhcs
        singular: machinehealthcheck
      scope: Namespaced
      versions:
      - name: v1beta1
        schema:
          openAPIV3Schema:
            description: 'MachineHealthCheck'
            properties:
              apiVersion:
                description: 'APIVersion defines the versioned schema of this representation'
                type: string
              kind:
                description: 'Kind is a string value representing the REST resource'
                type: string
              metadata:
                type: object
              spec:
                description: Specification of machine health check policy
                properties:
                  expectedMachines:
                    description: total number of machines counted by this machine health
                      check
                    minimum: 0
                    type: integer
                  unhealthyConditions:
                    description: UnhealthyConditions contains a list of the conditions.
                    items:
                      description: UnhealthyCondition represents a Node.
                      properties:
                        status:
                          minLength: 1
                          type: string
                        timeout:
                          description: Expects an unsigned duration string of decimal
                            numbers each with optional fraction and a unit suffix, eg
                            "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us"
                            (or "µs"), "ms", "s", "m", "h".
                          pattern: ^([0-9]+(\.[0-9]+)?(ns|us|µs|ms|s|m|h))+$
                          type: string
                        type:
                          minLength: 1
                          type: string
                      type: object
                    minItems: 1
                    type: array
                type: object

    For example, these operators manage the applications by orchestrating operations based on changes to the CustomResource (DSL):

    1. cluster-etcd-operator (Go): Manages etcd in OpenShift. Operations: Install, Monitor, Manage.
    2. prometheus-operator (Go): Manages Prometheus monitoring on a Kubernetes cluster. Operations: Install, Monitor, Manage, Configure.
    3. cluster-authentication-operator (Go): Manages OpenShift Authentication. Operations: Manage, Observe.

    As a developer, we’re going to follow a common development pattern:

    1. Implement the Operator Logic (Reconcile the operational state)
    2. Bake Container Image
    3. Create or regenerate Custom Resource Definition (CRD)
    4. Create or regenerate Role-based Access Control (RBAC)
      1. Role
      2. RoleBinding
    5. Apply Operator YAML

    Note, we’re not necessarily writing business logic, rather operational logic.

    There are some best practices we follow:

    1. Develop one operator per application
      1. One CRD per Controller. Created and Fit for Purpose. Less Contention.
      2. No Cross Dependencies.
    2. Use Kubernetes Primitives when Possible
    3. Be Backwards Compatible
    4. Compartmentalize features via multiple controllers
      1. Scale = one controller
      2. Backup = one controller
    5. Use asynchronous metaphors with the synchronous reconciliation loop
      1. Error, then immediate return, backoff and check later
      2. Use concurrency to split the processing / state
    6. Prune Kubernetes Resources when not used
    7. Apps Run when Operators are stopped
    8. Document what the operator does and how it does it
    9. Install in a single command

    We use the Operator SDK – for one, it's supported by Red Hat and the CNCF.

    operator-sdk: Which one? Ansible and Go

    Kubernetes is authored in the Go language. Currently, OpenShift uses Go 1.17, and most operators are implemented in Go. The community has built many Go-based operators, so there is much more support on StackOverflow and in forums.

    Ansible vs Go:

    1. Kubernetes Support: Ansible uses cached clients; Go has a solid, complete, and rich Kubernetes client.
    2. Language Type: Ansible is declarative (describe the end state); Go is imperative (describe how to get to the end state).
    3. Operator Type: Ansible is indirect, wrapped in the Ansible Operator; Go is direct.
    4. Style: Ansible is systems administration; Go is systems programming.
    5. Performance: Go is about 4M at startup in a single-layer scratch image; see the Ansible benchmark in the references.
    6. Security: Ansible has an expanded surface area; Go has a limited surface area.

    Go is ideal for concurrency and strong memory management, and everything is baked into the executable deliverable – it's in memory and ready to go. There are lots of alternatives for coding operators: NodeJS, Rust, Java, C#, Python. Note that the OpenShift Operators are not necessarily built on the Operator SDK.

    Summary

    We’ve run through a lot of detail on Operators and learned why we should go with Go operators.

    Reference

    1. CNCF Operator White Paper https://github.com/cncf/tag-app-delivery/blob/main/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md
    2. Operator pattern https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
    3. Operator SDK Framework https://sdk.operatorframework.io/docs/overview/
    4. Kubernetes Operators 101, Part 2: How operators work https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work?source=sso#
    5. Build Your Kubernetes Operator with the Right Tool https://cloud.redhat.com/blog/build-your-kubernetes-operator-with-the-right-tool
    6. Build Your Kubernetes Operator with the Right Tool (Hazelcast) https://hazelcast.com/blog/build-your-kubernetes-operator-with-the-right-tool/
    7. Operator SDK Best Practices https://sdk.operatorframework.io/docs/best-practices/
    8. Google Best practices for building Kubernetes Operators and stateful apps https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
    9. Kubernetes Operator Patterns and Best Practises https://github.com/IBM/operator-sample-go
    10. Fast vs Easy: Benchmarking Ansible Operators for Kubernetes https://www.ansible.com/blog/fast-vs-easy-benchmarking-ansible-operators-for-kubernetes
    11. Debugging a Kubernetes Operator https://www.youtube.com/watch?v=8hlx6F4wLAA&t=21s
    12. Contributing to the Image Registry Operator https://github.com/openshift/cluster-image-registry-operator/blob/master/CONTRIBUTING.md
    13. Leszko’s OperatorCon Presentation
      1. YouTube https://www.youtube.com/watch?v=hTapESrAmLc
      2. GitHub Repo for Session: https://github.com/leszko/build-your-operator
  • OpenShift RequestHeader Identity Provider with a Test IdP: My GoLang Test

    I built a demonstration using GoLang, JSON, bcrypt, an HTTP client, and an HTTP server to model an actual IdP. This is a demonstration only; it really helped me set up and understand what's happening in the RequestHeader flow.

    OpenShift 4.10: Configuring a request header identity provider enables an external service to act as an identity provider, where an X-Remote-User header identifies the user.

    This document outlines the flow using the HAProxy and Apache httpd already installed on the bastion server as part of the installation process, plus a local Go Test IdP, to demonstrate the feature.

    The rough flow runs between OpenShift, the user, and the Test IdP.

    My Code is available at https://github.com/prb112/openshift-auth-request-header

  • Debugging Network Traffic

    When debugging weird traffic patterns on the Mac, you can use nettop. It shows the actual amount of data transferred by each process. It's very helpful.

    Commandline

    nettop -m tcp

    Example

    kernel_task.0                                                                                                      1512 MiB        1041 MiB   387 KiB    11 MiB  1823 KiB
       tcp4 1.1.1.30:52104<->1.1.1.29:548                                                     en0   Established        1512 MiB        1041 MiB   387 KiB    11 MiB  1823 KiB 145.12 ms   791 KiB  1545 KiB    BK_SYS
    vpnagentd.88                                                                                                        158 KiB         554 MiB     0 B       0 B      74 B
       tcp4 1.1.1.30:56141<->1.1.1.12:443                                                  en0   Established          26 KiB          12 KiB     0 B       0 B      74 B    77.25 ms   128 KiB    32 KiB        BE
       tcp4 127.0.0.1:29754<->localhost:49229                                                 lo0   Established         131 KiB         554 MiB     0 B       0 B       0 B     1.22 ms   266 KiB   379 KiB        BE
    com.crowdstrike.341                                                                                                 995 KiB        5615 KiB   675 B     279 B      29 KiB
       tcp4 1.1.1.30:51978<->ec2-50-18-194-39.us-west-1.compute.amazonaws.com:443        en0   Established         995 KiB        5615 KiB   675 B     279 B      29 KiB  93.69 ms   128 KiB    55 KiB        RD
    
  • Using OpenShift Plugin for oc

    For those managing OpenShift clusters, the oc tool manages all the OpenShift resources with handy commands for OpenShift and Kubernetes. The OpenShift Client CLI (oc) project is built on top of kubectl adding built-in features to simplify interactions with an OpenShift cluster.

    Much like kubectl, the oc CLI tool provides a feature to extend the OpenShift CLI with plug-ins. The oc plugins feature is a client-side feature to facilitate interactions with extension commands found in the current user's path. There is an ecosystem of plugins through the community and the Krew Plugin List.

    These plugins include:

    1. cost accesses Kubernetes cost allocation metrics
    2. outdated displays all out-of-date images running in a Kubernetes cluster
    3. pod-lens shows pod-related resource information
    4. k9s is a terminal based UI to interact with your Kubernetes clusters.
    5. sample-cli-plugin which is a simple example to show how to switch namespaces in k8s. I’m not entirely certain that this works with OpenShift.

    These plugins have a wide range of support and code. Some of the plugins are based on python, others are based on go and bash.

    oc expands the plugin filename prefixes in pkg/cli/kubectlwrappers/wrappers.go via plugin.ValidPluginFilenamePrefixes = []string{"oc", "kubectl"}, so whole new OpenShift-specific plugins are supported. The OpenShift team has also released a number of plugins:

    1. oc-mirror manages OpenShift release, operator catalog, helm charts, and associated container images for mirror registries that support OpenShift environments
    2. oc-compliance facilitates using the OpenShift Compliance operator.
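    The prefix rule above (ValidPluginFilenamePrefixes) is easy to mimic in a few lines of bash to see how a filename would be classified; the helper function is made up for illustration:

```shell
#!/usr/bin/env bash
# Mirrors plugin.ValidPluginFilenamePrefixes = []string{"oc", "kubectl"}:
# a file on the PATH is treated as a plugin when its name starts with oc- or kubectl-.
is_plugin_name() {
  case "$1" in
    oc-*|kubectl-*) echo "plugin" ;;
    *)              echo "not a plugin" ;;
  esac
}

is_plugin_name "oc-mirror"        # plugin
is_plugin_name "kubectl-outdated" # plugin
is_plugin_name "mytool"           # not a plugin
```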

    Many of these extensions/plugins are installed using krew; krew is a plugin manager for kubectl. Some users create a directory .kube/plugins and install their plugins in that folder. The plugins folder is then added to the user’s path.

    Creating your own Extension

    1. Check to see if any plugins exist:
    $ oc plugin list
    The following compatible plugins are available:
    
    /Users/user/.kube/plugins/oc-test
    

    If none exist, it’ll prompt you that none are found in the path, and you can install from krew.

    2. Create a new file oc-test
    #! /usr/bin/env bash
    
    echo "Execution Time: $(date)"
    
    echo ""
    ps -Sf
    echo ""
    
    echo "Arguments: $@"
    
    echo "Environment Variables: "
    env
    echo ""
    
    oc version --client
    
    3. Add the file to the path.
    export PATH=~/.kube/plugins:$PATH
    
    4. Execute the oc plugin (note the oc- prefix is stripped off when invoking)
    Execution Time: Wed Mar 30 11:22:19 EDT 2022
    
      UID   PID  PPID   C STIME   TTY           TIME CMD
      501  3239  3232   0 15Mar22 ttys000    0:01.39 -zsh
      501 80267  3239   0 17Mar22 ttys000    0:00.03 tmux
      501 54273 11494   0 Tue10AM ttys001    0:00.90 /bin/zsh -l
      501 80319 80269   0 17Mar22 ttys002    0:00.30 -zsh
      501  2430  2429   0 15Mar22 ttys003    0:03.17 -zsh
      501 78925  2430   0 11:22AM ttys003    0:00.09 bash /Users/user/.kube/plugins/oc-test test
      501 80353 80269   0 17Mar22 ttys004    0:02.07 -zsh
      501 91444 11494   0 18Mar22 ttys005    0:01.55 /bin/zsh -l
    
    Arguments: test
    Environment Variables: 
    SHELL=/bin/zsh
    TERM=xterm-256color
    ZSH=/Users/user/.oh-my-zsh
    USER=user
    PATH=/Users/user/.kube/plugins:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
    PWD=/Users/user/Downloads
    LANG=en_US.UTF-8
    HOME=/Users/user
    LESS=-R
    LOGNAME=user
    SECURITYSESSIONID=user
    _=/usr/bin/env
    
    Client Version: 4.10.6
    

    The above shows a simple plugin demonstration.

    Reference

    1. Getting started with the OpenShift CLI
    2. Extending the OpenShift CLI with plug-ins
    3. https://cloud.redhat.com/blog/augmenting-openshift-cli-with-plugins
    4. https://cloudcult.dev/tcpdump-for-openshift-workloads/
  • Learning Resources for Operators – First Two Weeks Notes

    To quote the Kubernetes website, "The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides." The following is a compendium to use while learning Operators.

    The de facto SDK to use is the Operator SDK, which provides Helm, Ansible, and Go scaffolding to support your implementation of the Operator pattern.

    The following are education classes on the OperatorSDK

    When running through the CO0201EN intermediate operators course, I hit a case where I had to create a ClusterRole and ClusterRoleBinding for the ServiceAccount; here is a snippet that might help others:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      namespace: memcached-operator-system
      name: service-reader-cr-mc
    rules:
    - apiGroups: ["cache.bastide.org"] # "" indicates the core API group
      resources: ["memcacheds"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      namespace: memcached-operator-system
      name: ext-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: service-reader-cr-mc
    subjects:
    - kind: ServiceAccount
      namespace: memcached-operator-system
      name: memcached-operator-controller-manager

    The reason for the above: I had missed adding the kubebuilder declarations:

    //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch

    Thanks to https://stackoverflow.com/a/60334649/1873438
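    With those markers in place, regenerating the manifests (typically make manifests in an Operator SDK project) emits the matching RBAC rules, so the hand-written ClusterRole above becomes unnecessary. A sketch of the rough shape of what gets generated under config/rbac/ (the role name is illustrative):

```yaml
# Sketch only: roughly what the two kubebuilder markers above generate.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role   # illustrative name
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```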

    The following are articles worth reviewing:

    The following are good Go resources:

    1. Go Code Comments – To write idiomatic Go, you should review the Code Review comments.
    2. Getting to Go: The Journey of Go’s Garbage Collector – The reference for Go and Garbage Collection in go
    3. An overview of memory management in Go – good overview of Go Memory Management
    4. Golang: Cost of using the heap – allocations up to about 1M appear to stay on the stack; beyond that, they appear to land on the heap
    5. golangci-lint – The aggregated linters project is worthy of an installation and use. It’ll catch many issues and has a corresponding GitHub Action.
    6. Go in 3 Weeks A comprehensive training for Go. Companion to GitHub Repo
    7. Defensive Coding Guide: The Go Programming Language

    The following are good OpenShift resources:

    1. Create OpenShift Plugins – You must have a CLI plug-in file that begins with oc- or kubectl-. You create a file and put it in /usr/local/bin/
    2. Details on running Code Ready Containers on Linux – The key hack I learned was to ssh -i ~/.crc/machines/crc/id_ecdsa core@<any host in the /etc/hosts>
      1. I ran on VirtualBox Ubuntu 20.04 with Guest Additions Installed
      2. Virtual Box Settings for the Machine – 6 CPU, 18G
        1. System > Processor > Enable PAE/NX and Enable Nested VT-X/AMD-V (which is a must for it to work)
        2. Network > Change Adapter Type to virtio-net and Set Promiscuous Mode to Allow VMs
      3. Install openssh-server so you can login remotely
      4. It will not install without a windowing system, so I have the default windowing environment installed.
      5. Note, I still get a failure on startup complaining about a timeout. I waited about 15 minutes after this, and the command oc get nodes --context admin --cluster crc --kubeconfig .crc/cache/crc_libvirt_4.10.3_amd64/kubeconfig now works.
    3. CRC virsh cheatsheet – If you are running Code Ready Containers and need to debug, you can use the virsh cheatsheet.
  • Hack: Fast Forwarding a Video

    I had to watch 19 hours of slow paced videos for a training on a new software product (at least new to me). I like fast paced trainings… enter a browser hack.

    In Firefox, Navigate to Tools > Browser Tools > Web Developer Tools

    Click Console

    Type the following snippet to find the first video on the page and change its playback rate, then press Enter.

    document.getElementsByTagName('video').item(0).playbackRate = 4.0

    Note: 4.0 can be unintelligible; you'll need to tweak the speed to match what you need. I found 2.5 to 3.0 to be very comfortable (you just can't multitask).