Category: OpenShift

  • Accessing and Using the Internal OpenShift Registry

    The following shows how to enable the internal OpenShift image registry on IBM Cloud’s hosted OpenShift.

    1. Log in to ibmcloud using the command-line tool
    $ ibmcloud login --sso    
    
    2. Select your account

    3. Set up OpenShift Cluster access

    $ ibmcloud oc cluster config -c rdr-ocp-base-lon06-pdb --admin
    
    4. Type oc login (it’ll tell you where to request the OAuth token)

    5. Get a token at https://XYZ.com:31344/oauth/token/request

    $ oc login --token=sha256~aaa --server=https://XYZ.com:31609
    Logged into "https://XYZ.com:31609" as "IAM#xyz@us.ibm.com" using the token provided.
    
    You have access to 63 projects, the list has been suppressed. You can list all projects with 'oc projects'
    
    Using project "default".
    
    6. Set up the external route for the Image Registry
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    config.imageregistry.operator.openshift.io/cluster patched
    
    7. Check the OpenShift image registry host; you’ll see the hostname printed.
    $ oc get route default-route -n openshift-image-registry --template='{{.spec.host }}'
    default-route-openshift-image-registry.xyz.cloud
    
    8. Make the local registry lookup use relative names
    $ oc set image-lookup --all
    
    9. Log in to the Docker Registry
    $ docker login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.xyz.cloud
    Login Succeeded
    
    10. Pull Nginx
    $ docker pull nginx
    
    11. Tag the Image for the Image Registry
    $ docker tag nginx:latest default-route-openshift-image-registry.xyz.cloud/$(oc project --short=true)/nginx-int:latest
    
    12. Push the Image into the OpenShift Image Registry
    $ docker push default-route-openshift-image-registry.xyz.cloud/$(oc project --short=true)/nginx-int:latest 
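    
    The push also creates an image stream in the project. To verify it, and to confirm the local lookup policy set earlier, something like the following should work (this assumes the default project used above):
    $ oc get imagestream nginx-int -n default
    $ oc get imagestream nginx-int -n default -o jsonpath='{.spec.lookupPolicy.local}'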
    
    13. Use image-registry.openshift-image-registry.svc:5000/default/nginx-int:latest as the image name in your deployment
    $ oc run test-a --image image-registry.openshift-image-registry.svc:5000/default/nginx-int:latest
    pod/test-a created
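    
    Beyond oc run, a minimal Deployment sketch using the same internal image reference (the name test-deploy is illustrative):
    $ cat << 'EOF' | oc apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deploy
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test-deploy
      template:
        metadata:
          labels:
            app: test-deploy
        spec:
          containers:
            - name: nginx-int
              image: image-registry.openshift-image-registry.svc:5000/default/nginx-int:latest
    EOF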
    

    Reference

    1. OpenShift 4.10: Exposing the Image Registry
  • OpenShift Kube Descheduler Operator – Profile Examples

    For the last few weeks, I’ve been working with the OpenShift Kube Descheduler and OpenShift Kube Descheduler Operator.

    I posted some test cases for the seven Descheduler Profiles to demonstrate how the Profiles operate under specific conditions. Note these are unsupported test cases.

    The test cases cover the following Profiles:

    1. AffinityAndTaints
    2. TopologyAndDuplicates
    3. LifecycleAndUtilization
    4. SoftTopologyAndDuplicates
    5. EvictPodsWithLocalStorage
    6. EvictPodsWithPVC
    7. DevPreviewLongLifecycle
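    
    As a minimal sketch (unsupported, like the test cases above), a KubeDescheduler CR enabling a couple of these profiles might look like:
    cat << 'EOF' | oc apply -f -
    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      managementState: Managed
      deschedulingIntervalSeconds: 3600
      profiles:
        - AffinityAndTaints
        - EvictPodsWithLocalStorage
    EOF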

    Summary

    I hope this helps you adopt the OpenShift Kube Descheduler Operator.

    References

    1. Evicting pods using the descheduler
    2. Kubernetes: Pod Topology Spread Constraints
    3. Kubernetes: Inter-pod affinity and anti-affinity
    4. Kubernetes: Well-Known labels and taints
    5. Adding Labels to a Running Pod
    6. Label Selector for k8s.io
    7. Pod Affinity and AntiAffinity Examples
    8. Scheduling pods using a scheduler profile
    9. Kubernetes: Assigning Pods to Nodes
    10. OpenShift 3.11: Advanced Scheduling and Pod Affinity/Anti-affinity
    11. Kubernetes: Pod Lifecycle
    12. Base Profiles
    13. Descheduler User Guide
    14. Kubernetes: Scheduling Framework
    15. GitHub: openshift/cluster-kube-descheduler-operator
  • OpenShift Descheduler Operator: How-To

    In OpenShift, the kube-scheduler binds a unit of work (Pod) to a Node. The scheduler reads work from a scheduling queue, retrieves the current state of the cluster, scores the work based on the scheduling rules (from the policy) and the cluster’s state, and prioritizes binding the Pod to a Node.

    Pods are thus scheduled based on an instantaneous read of the policy and the environment – a best-estimate placement of the Pod on a Node at that moment. Since clusters constantly change shape and context, there is a need to deschedule and schedule the Pod anew.

    There are four actors in the Descheduler:

    1. The User configures the KubeDescheduler resource
    2. The Operator creates the Descheduler Deployment
    3. The Descheduler runs on a set interval, re-evaluates the scheduled Pods against the Nodes and the Policy, and marks a Pod for eviction if it should be removed based on the Descheduler Policy
    4. The Pod is removed (unbound).

    Thankfully, OpenShift has a Descheduler Operator that more easily facilitates the unbinding of a Pod from a Node based on a cluster-wide configuration of the KubeDescheduler CustomResource. In a single cluster, there is at most one KubeDescheduler, named cluster (the name is fixed), and it configures one or more Descheduler Profiles.

    Descheduler Profiles are predefined and available in the profiles folder – DeschedulerProfile:

    AffinityAndTaints – Balances pods based on node taint violations.
    TopologyAndDuplicates – Spreads pods evenly among nodes based on topology constraints and duplicate replicas on the same node. This profile cannot be used with SoftTopologyAndDuplicates.
    SoftTopologyAndDuplicates – Spreads pods as in TopologyAndDuplicates, but also considers pods with soft topology constraints. This profile cannot be used with TopologyAndDuplicates.
    LifecycleAndUtilization – Balances pods based on node resource usage. This profile cannot be used with DevPreviewLongLifecycle.
    EvictPodsWithLocalStorage – Enables pods with local storage to be evicted by all other profiles.
    EvictPodsWithPVC – Prevents pods with PVCs from being evicted by all other profiles.
    DevPreviewLongLifecycle – Lifecycle management for pods that are ‘long running’. This profile cannot be used with LifecycleAndUtilization.

    At least one DeschedulerProfile must be specified, and there cannot be duplicate entries. There are two possible mode values – Automatic and Predictive. In Predictive mode, you have to go to the Pod and check the output to see what is Predicted or Completed.

    The DeschedulerOperator excludes the openshift-*, kube-system and hypershift namespaces.

    Steps

    1. Log in to your OpenShift Cluster
    oc login --token=sha256~1111-g --server=https://api..sslip.io:6443

    2. Create a Pod that indicates it’s available for eviction using the annotation descheduler.alpha.kubernetes.io/evict: "true"; update it for the proper node name as needed.

    cat << EOF > pod.yaml 
    kind: Pod
    apiVersion: v1
    metadata:
      annotations:
        descheduler.alpha.kubernetes.io/evict: "true"
      name: demopod1
      labels:
        foo: bar
    spec:
      containers:
      - name: pause
        image: docker.io/ibmcom/pause-ppc64le:3.1
    EOF
    oc apply -f pod.yaml 
    pod/demopod1 created
    3. Create the KubeDescheduler CR with a descheduling interval of 60 seconds and a pod lifetime of 1m0s.
    cat << EOF > kd.yaml 
    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      logLevel: Normal
      mode: Predictive
      operatorLogLevel: Normal
      deschedulingIntervalSeconds: 60
      profileCustomizations:
        podLifetime: 1m0s
      observedConfig:
        servingInfo:
          cipherSuites:
            - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
            - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
            - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
            - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
            - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
          minTLSVersion: VersionTLS12
      profiles:
        - LifecycleAndUtilization
      managementState: Managed
    EOF
    oc apply -f kd.yaml
    4. Get the Pods in the openshift-kube-descheduler-operator namespace
    oc get pods -n openshift-kube-descheduler-operator                              
    NAME                                    READY   STATUS    RESTARTS   AGE
    descheduler-f479c5669-5ffxl             1/1     Running   0          2m7s
    descheduler-operator-85fc6666cb-5dfr7   1/1     Running   0          27h
    5. Check the logs for the descheduler pod
    oc -n openshift-kube-descheduler-operator logs descheduler-f479c5669-5ffxl
    I0506 19:59:10.298440       1 pod_lifetime.go:110] "Evicted pod because it exceeded its lifetime" pod="minio-operator/console-7bc65f7dd9-q57lr" maxPodLifeTime=60
    I0506 19:59:10.298500       1 evictions.go:158] "Evicted pod in dry run mode" pod="default/demopod1" reason="PodLifeTime"
    I0506 19:59:10.298532       1 pod_lifetime.go:110] "Evicted pod because it exceeded its lifetime" pod="default/demopod1" maxPodLifeTime=60
    I0506 19:59:10.298598       1 toomanyrestarts.go:90] "Processing node" node="master-0.rdr-rhop-.sslip.io"
    I0506 19:59:10.299118       1 toomanyrestarts.go:90] "Processing node" node="master-1.rdr-rhop.sslip.io"
    I0506 19:59:10.299575       1 toomanyrestarts.go:90] "Processing node" node="master-2.rdr-rhop.sslip.io"
    I0506 19:59:10.300385       1 toomanyrestarts.go:90] "Processing node" node="worker-0.rdr-rhop.sslip.io"
    I0506 19:59:10.300701       1 toomanyrestarts.go:90] "Processing node" node="worker-1.rdr-rhop.sslip.io"
    I0506 19:59:10.301097       1 descheduler.go:287] "Number of evicted pods" totalEvicted=5

    This article showed a simple case for the Descheduler: running in Predictive (dry run) mode, it reported that it would have evicted five pods.
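    
    To move from reporting to actual eviction, the mode field shown in the CR above can be flipped to Automatic; a sketch:
    oc patch kubedescheduler cluster -n openshift-kube-descheduler-operator --type=merge --patch '{"spec":{"mode":"Automatic"}}'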

  • Operator Training – Part 1: Concepts and Why Use Go

    A brief Operator training I gave to my team resulted in these notes. Thanks to many others in the reference section.

    An Operator codifies the tasks commonly associated with administering, operating, and supporting an application. The codified tasks are event-driven responses to changes (create-update-delete-time) in the declared state relative to the actual state of an application, using domain knowledge to reconcile the state and report on the status.

    Figure 1 Operator Pattern

    Operators are used to execute basic and advanced operations:

    Basic (Helm, Go, Ansible)

    1. Installation and Configuration
    2. Uninstall and Destroy
    3. Seamless Upgrades

    Advanced (Go, Ansible)

    1. Application Lifecycle (Backup, Failure Recovery)
    2. Monitoring, Metrics, Alerts, Log Processing, Workload Analysis
    3. Auto-scaling: Horizontal and Vertical
    4. Event (Anomaly) Detection and Response (Remediation)
    5. Scheduling and Tuning
    6. Application Specific Management
    7. Continuous Testing and Chaos Monkey

    Helm operators wrap Helm charts in a simple operational view that passes through the Helm verbs, so one can install, uninstall, destroy, and upgrade using an Operator.

    There are four actors in the Operator Pattern.

    1. Initiator – The user who creates the Custom Resource
    2. Operator – The Controller that operates on the Operand
    3. Operand – The target application
    4. OpenShift and Kubernetes Environment
    Figure 2 Common Terms

    Each Operator operates on an Operand using Managed Resources (Kubernetes and OpenShift) to reconcile states. The states are described in a domain-specific language (DSL), encapsulated in a Custom Resource, that describes the state of the application:

    1. spec – The User communicates to the Operator the desired state (Operator reads)
    2. status – The Operator communicates back to the User (Operator writes)
    $ oc get authentications cluster -o yaml
    apiVersion: config.openshift.io/v1
    kind: Authentication
    metadata:
      annotations:
        include.release.openshift.io/ibm-cloud-managed: "true"
        include.release.openshift.io/self-managed-high-availability: "true"
        include.release.openshift.io/single-node-developer: "true"
        release.openshift.io/create-only: "true"
    spec:
      oauthMetadata:
        name: ""
      serviceAccountIssuer: ""
      type: ""
      webhookTokenAuthenticator:
        kubeConfig:
          name: webhook-authentication-integrated-oauth
    status:
      integratedOAuthMetadata:
        name: oauth-openshift

    While Operators are not limited to writing only spec and status, if we treat spec as initiator-specified and status as operator-written, we limit the chances of creating an unintended reconciliation loop.

    The DSL is specified as Custom Resource Definition:

    $ oc get crd machinehealthchecks.machine.openshift.io -o=yaml
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    spec:
      conversion:
        strategy: None
      group: machine.openshift.io
      names:
        kind: MachineHealthCheck
        listKind: MachineHealthCheckList
        plural: machinehealthchecks
        shortNames:
        - mhc
        - mhcs
        singular: machinehealthcheck
      scope: Namespaced
      versions:
      - name: v1beta1
        schema:
          openAPIV3Schema:
            description: 'MachineHealthCheck'
            properties:
              apiVersion:
                description: 'APIVersion defines the versioned schema of this representation'
                type: string
              kind:
                description: 'Kind is a string value representing the REST resource'
                type: string
              metadata:
                type: object
              spec:
                description: Specification of machine health check policy
                properties:
                  expectedMachines:
                    description: total number of machines counted by this machine health
                      check
                    minimum: 0
                    type: integer
                  unhealthyConditions:
                    description: UnhealthyConditions contains a list of the conditions.
                    items:
                      description: UnhealthyCondition represents a Node.
                      properties:
                        status:
                          minLength: 1
                          type: string
                        timeout:
                          description: Expects an unsigned duration string of decimal
                            numbers each with optional fraction and a unit suffix, eg
                            "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us"
                            (or "µs"), "ms", "s", "m", "h".
                          pattern: ^([0-9]+(\.[0-9]+)?(ns|us|µs|ms|s|m|h))+$
                          type: string
                        type:
                          minLength: 1
                          type: string
                      type: object
                    minItems: 1
                    type: array
                type: object

    For example, these operators manage the applications by orchestrating operations based on changes to the CustomResource (DSL):

    Operator | Type/Language | What it does | Operations
    cluster-etcd-operator | go | Manages etcd in OpenShift | Install, Monitor, Manage
    prometheus-operator | go | Manages Prometheus monitoring on a Kubernetes cluster | Install, Monitor, Manage, Configure
    cluster-authentication-operator | go | Manages OpenShift Authentication | Manage, Observe

    As a developer, we’re going to follow a common development pattern:

    1. Implement the Operator Logic (Reconcile the operational state)
    2. Bake Container Image
    3. Create or regenerate Custom Resource Definition (CRD)
    4. Create or regenerate Role-based Access Control (RBAC)
      1. Role
      2. RoleBinding
    5. Apply Operator YAML

    Note, we’re not necessarily writing business logic, but rather operational logic; the sketch below maps this pattern to the Operator SDK workflow.
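    
    As a rough sketch with the Go-based Operator SDK (the domain, repo, group, kind, and image names here are illustrative):
    # Scaffold the project and a CRD/controller pair
    operator-sdk init --domain example.com --repo github.com/example/memcached-operator
    operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
    # 1. Implement the operational logic in the generated Reconcile() method
    # 2. Bake the container image
    make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1
    # 3./4. Regenerate the CRD and RBAC manifests from the kubebuilder markers
    make manifests
    # 5. Apply the operator YAML to the cluster
    make deploy IMG=quay.io/example/memcached-operator:v0.0.1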

    There are some best practices we follow:

    1. Develop one operator per application
      1. One CRD per Controller. Created and Fit for Purpose. Less Contention.
      2. No Cross Dependencies.
    2. Use Kubernetes Primitives when Possible
    3. Be Backwards Compatible
    4. Compartmentalize features via multiple controllers
      1. Scale = one controller
      2. Backup = one controller
    5. Use asynchronous metaphors with the synchronous reconciliation loop
      1. Error, then immediate return, backoff and check later
      2. Use concurrency to split the processing / state
    6. Prune Kubernetes Resources when not used
    7. Apps Run when Operators are stopped
    8. Document what the operator does and how it does it
    9. Install in a single command

    We use the Operator SDK – for one, it’s supported by Red Hat and the CNCF.

    operator-sdk: Which one? Ansible or Go

    Kubernetes is authored in the Go language. Currently, OpenShift uses Go 1.17, and most operators are implemented in Go. The community has built many Go-based operators, so there is much more support on StackOverflow and in the forums.

    Aspect | Ansible | Go
    Kubernetes Support | Cached clients | Solid, complete, and rich Kubernetes client
    Language Type | Declarative – describe the end state | Imperative – describe how to get to the end state
    Operator Type | Indirect – wrapped in the Ansible-Operator | Direct
    Style | Systems Administration | Systems Programming
    Performance | (link) | ~4M at startup; single-layer scratch image
    Security | Expanded surface area | Limited surface area

    Go is ideal for concurrency, has strong memory management, and everything is baked into the executable deliverable – it’s in memory and ready to go. There are lots of alternatives for writing operators – NodeJS, Rust, Java, C#, Python – and the OpenShift Operators are not necessarily built on the Operator SDK.

    Summary

    We’ve run through a lot of detail on Operators and learned why we should go with Go operators.

    Reference

    1. CNCF Operator White Paper https://github.com/cncf/tag-app-delivery/blob/main/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md
    2. Operator pattern https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
    3. Operator SDK Framework https://sdk.operatorframework.io/docs/overview/
    4. Kubernetes Operators 101, Part 2: How operators work https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work?source=sso#
    5. Build Your Kubernetes Operator with the Right Tool https://cloud.redhat.com/blog/build-your-kubernetes-operator-with-the-right-tool https://hazelcast.com/blog/build-your-kubernetes-operator-with-the-right-tool/
    6. Operator SDK Best Practices https://sdk.operatorframework.io/docs/best-practices/
    7. Google: Best practices for building Kubernetes Operators and stateful apps https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
    8. Kubernetes Operator Patterns and Best Practices https://github.com/IBM/operator-sample-go
    9. Fast vs Easy: Benchmarking Ansible Operators for Kubernetes https://www.ansible.com/blog/fast-vs-easy-benchmarking-ansible-operators-for-kubernetes
    10. Debugging a Kubernetes Operator https://www.youtube.com/watch?v=8hlx6F4wLAA&t=21s
    11. Contributing to the Image Registry Operator https://github.com/openshift/cluster-image-registry-operator/blob/master/CONTRIBUTING.md
    12. Leszko’s OperatorCon Presentation
      1. YouTube https://www.youtube.com/watch?v=hTapESrAmLc
      2. GitHub repo for the session: https://github.com/leszko/build-your-operator
  • Proof-of-Concept: OpenShift on Power: Configuring an OpenID Connect identity provider

    This document outlines the installation of OpenShift on Power, the installation of the Red Hat Single Sign-On Operator, and the configuration of the two to work together on OCP.

    Thanks to Zhimin Wen, whose great work helped in my setup of OIDC.

    Steps

    1. Set up OpenShift Container Platform (OCP) 4.x on IBM® Power Systems™ Virtual Server on IBM Cloud using the Terraform-based automation code and the documentation provided. You’ll need to update var.tfvars to match your environment and PowerVS Service settings.
    terraform init --var-file=var.tfvars
    terraform apply --var-file=var.tfvars
    
    2. At the end of the deployment, you’ll see output pointing to the Bastion Server.
    bastion_private_ip = "192.168.*.*"
    bastion_public_ip = "158.*.*.*"
    bastion_ssh_command = "ssh -i data/id_rsa root@158.*.*.*"
    bootstrap_ip = "192.168.*.*"
    cluster_authentication_details = "Cluster authentication details are available in 158.*.*.* under ~/openstack-upi/auth"
    cluster_id = "ocp-oidc-test-cb68"
    install_status = "COMPLETED"
    master_ips = [
      "192.168.*.*",
      "192.168.*.*",
      "192.168.*.*",
    ]
    oc_server_url = "https://api.ocp-oidc-test-cb68.*.*.*.*.xip.io:6443"
    storageclass_name = "nfs-storage-provisioner"
    web_console_url = "https://console-openshift-console.apps.ocp-oidc-test-cb68.*.*.*.*.xip.io"
    worker_ips = [
      "192.168.*.*",
      "192.168.*.*",
    ]
    
    3. Add a hosts entry
    127.0.0.1 console-openshift-console.apps.ocp-oidc-test-cb68.*.xip.io api.ocp-oidc-test-cb68.*.xip.io oauth-openshift.apps.ocp-oidc-test-cb68.*.xip.io
    
    4. Connect via SSH
    sudo ssh -i data/id_rsa -L 5900:localhost:5901 -L443:localhost:443 -L6443:localhost:6443 -L8443:localhost:8443 root@*
    

    You’re connecting on the command line with ports forwarded because not all ports are open on the Bastion Server.

    5. Find the OpenShift kubeadmin password in openstack-upi/auth/kubeadmin-password
    cat openstack-upi/auth/kubeadmin-password
    eZ2Hq-JUNK-JUNKB4-JUNKZN
    
    6. Log in via the web_console_url: navigate to https://console-openshift-console.apps.ocp-oidc-test-cb68.*.xip.io/

    If prompted, accept Security Warnings

    7. Log in with the kubeadmin credentials when prompted
    8. Click OperatorHub
    9. Search for Keycloak
    10. Select Red Hat Single Sign-On Operator
    11. Click Install
    12. On the Install Operator screen:
      1. Select the alpha channel
      2. Select namespace default (if you prefer an alternative namespace, that’s fine; this is just a demo)
      3. Click Install
    13. Click on Installed Operators
    14. Watch rhsso-operator for a completed installation; the status should show Succeeded
    15. Once ready, click on the Operator > Red Hat Single Sign-On Operator
    16. Click on Keycloak, then create a Keycloak
    17. Enter the following YAML:
    apiVersion: keycloak.org/v1alpha1
    kind: Keycloak
    metadata:
      name: example-keycloak
      labels:
        app: sso
    spec:
      instances: 1
      externalAccess:
        enabled: true
    
    18. Once it’s deployed, click on example-keycloak > YAML and look for status.externalURL.
    status:
      credentialSecret: credential-example-keycloak
      externalURL: 'https://keycloak-default.apps.ocp-oidc-test-cb68.*.xip.io'
    
    19. Update /etc/hosts with
    127.0.0.1 keycloak-default.apps.ocp-oidc-test-cb68.*.xip.io 
    
    20. Click Workloads > Secrets
    21. Click on credential-example-keycloak
    22. Click Reveal values
    U: admin
    P: <<hidden>>
    
    23. Log in to Keycloak at https://keycloak-default.apps.ocp-oidc-test-cb68.*.xip.io/auth/admin/master/console/#/realms/master using the revealed credentials
    24. Click Add Realm
    25. Enter the name test
    26. Click Create
    27. Click Clients
    28. Click Create
    29. Enter Client ID – test
    30. Select openid-connect
    31. Click Save
    32. Click Keys
    33. Click Generate new keys and certificate
    34. Click Settings > Access Type
    35. Select confidential
    36. Enter Valid Redirect URIs: https://* (we could restrict this to the OAuth URL, such as https://oauth-openshift.apps.ocp-oidc-test-cb68.*.xip.io/*)
    37. Click Credentials and copy the Secret, such as:
    43f4e544-fa95-JUNK-a298-JUNK
    
    38. Under Generate Private Key…
      1. Select Archive Format JKS
      2. Key Password: password
      3. Store Password: password
      4. Click Generate and Download
    39. On the Bastion server, create the keycloak client secret
    oc -n openshift-config create secret generic keycloak-client-secret --from-literal=clientSecret=43f4e544-fa95-JUNK-a298-JUNK
    
    40. Grab the ingress CA
    oc -n openshift-ingress-operator get secret router-ca -o jsonpath="{ .data.tls\.crt }" | base64 -d -i > ca.crt
    
    41. Create the keycloak CA config map
    oc -n openshift-config create cm keycloak-ca --from-file=ca.crt
    configmap/keycloak-ca created
    
    42. Create the OpenID auth provider
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
        - name: keycloak 
          mappingMethod: claim 
          type: OpenID
          openID:
            clientID: test
            clientSecret:
              name: keycloak-client-secret
            ca:
              name: keycloak-ca
            claims: 
              preferredUsername:
              - preferred_username
              name:
              - name
              email:
              - email
            issuer: https://keycloak-default.apps.ocp-oidc-test-cb68.*.xip.io/auth/realms/test
    
    43. Log out of kubeadmin
    44. In Keycloak, under Manage > Users, add a user with an email and password; click Save
    45. Click Credentials
    46. Enter a new password and confirm it
    47. Turn Temporary Password off
    48. Navigate to the web_console_url
    49. Select the new IdP
    50. Log in with the new user

    There is clear support for OpenID Connect already enabled in OpenShift, and this document outlines how to test it with Keycloak.

    A handy link for debugging is the realm’s .well-known/openid-configuration endpoint.
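    
    For example, for the test realm created above (-k is used because the router CA may not be trusted locally):
    curl -k https://keycloak-default.apps.ocp-oidc-test-cb68.*.xip.io/auth/realms/test/.well-known/openid-configuration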

    Reference

    Blog: Keycloak OIDC Identity Provider for OpenShift


  • OpenShift RequestHeader Identity Provider with a Test IdP: My GoLang Test

    I built a demonstration using Go, JSON, bcrypt, and an HTTP client/server to model an actual IdP. This is a demonstration only; it really helped me set up and understand what’s happening in the RequestHeader flow.

    OpenShift 4.10: Configuring a request header identity provider enables an external service to act as an identity provider, passing an X-Remote-User header to identify the user.

    This document outlines the flow using the haproxy and Apache httpd already installed on the Bastion server as part of the installation process, plus a local Go Test IdP, to demonstrate the feature.
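    
    For orientation, a request header identity provider in the OAuth cluster resource looks roughly like this sketch (URLs and names are illustrative; see the OpenShift documentation for the full set of fields):
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
        - name: requestheaderidp
          mappingMethod: claim
          type: RequestHeader
          requestHeader:
            challengeURL: "https://idp.example.com/challenging-proxy/oauth/authorize?${query}"
            loginURL: "https://idp.example.com/login-proxy/oauth/authorize?${query}"
            ca:
              name: request-header-ca
            headers:
              - X-Remote-User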

    Figure: The rough flow between OpenShift, the User, and the Test IdP.

    My Code is available at https://github.com/prb112/openshift-auth-request-header

  • Using OpenShift Plugin for oc

    For those managing OpenShift clusters, the oc tool manages all the OpenShift resources with handy commands for OpenShift and Kubernetes. The OpenShift Client CLI (oc) project is built on top of kubectl adding built-in features to simplify interactions with an OpenShift cluster.

    Much like kubectl, the oc CLI provides a feature to extend the OpenShift CLI with plug-ins. The oc plugin feature is a client-side feature that facilitates interactions with extension commands found in the current user’s path. There is an ecosystem of plugins through the community and the Krew Plugin List.

    These plugins include:

    1. cost accesses Kubernetes cost allocation metrics
    2. outdated displays all out-of-date images running in a Kubernetes cluster
    3. pod-lens shows pod-related resource information
    4. k9s is a terminal based UI to interact with your Kubernetes clusters.
    5. sample-cli-plugin, a simple example showing how to switch namespaces in k8s (I’m not entirely certain this works with OpenShift)

    These plugins have a wide range of support and code; some are based on Python, others on Go or Bash.

    oc expands the plugin search prefixes in pkg/cli/kubectlwrappers/wrappers.go with plugin.ValidPluginFilenamePrefixes = []string{"oc", "kubectl"}, so wholly new OpenShift-specific plugins are supported. The OpenShift team has also released a number of plugins:

    1. oc-mirror manages OpenShift release, operator catalog, helm charts, and associated container images for mirror registries that support OpenShift environments
    2. oc-compliance facilitates using the OpenShift Compliance operator.

    Many of these extensions/plugins are installed using krew, a plugin manager for kubectl. Some users instead create a directory .kube/plugins, install their plugins in that folder, and add it to their path.
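    
    For example, once krew is installed, adding one of the plugins listed above should look like this (krew installs it as kubectl-outdated, which oc also picks up):
    $ oc krew install outdated
    $ oc outdated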

    Creating your own Extension

    1. Check to see if any plugins exist:
    $ oc plugin list
    The following compatible plugins are available:
    
    /Users/user/.kube/plugins/oc-test
    

    If none exist, it’ll prompt you that none are found in the path, and you can install from krew.

    2. Create a new file oc-test
    #! /usr/bin/env bash
    
    echo "Execution Time: $(date)"
    
    echo ""
    ps -Sf
    echo ""
    
    echo "Arguments: $@"
    
    echo "Environment Variables: "
    env
    echo ""
    
    oc version --client
    
    3. Add the file to the path.
    export PATH=~/.kube/plugins:$PATH
    
    4. Execute the oc plugin; the oc- prefix is stripped off, so the file oc-test is invoked as oc test (here with one argument, test):
    $ oc test test
    Execution Time: Wed Mar 30 11:22:19 EDT 2022
    
      UID   PID  PPID   C STIME   TTY           TIME CMD
      501  3239  3232   0 15Mar22 ttys000    0:01.39 -zsh
      501 80267  3239   0 17Mar22 ttys000    0:00.03 tmux
      501 54273 11494   0 Tue10AM ttys001    0:00.90 /bin/zsh -l
      501 80319 80269   0 17Mar22 ttys002    0:00.30 -zsh
      501  2430  2429   0 15Mar22 ttys003    0:03.17 -zsh
      501 78925  2430   0 11:22AM ttys003    0:00.09 bash /Users/user/.kube/plugins/oc-test test
      501 80353 80269   0 17Mar22 ttys004    0:02.07 -zsh
      501 91444 11494   0 18Mar22 ttys005    0:01.55 /bin/zsh -l
    
    Arguments: test
    Environment Variables: 
    SHELL=/bin/zsh
    TERM=xterm-256color
    ZSH=/Users/user/.oh-my-zsh
    USER=user
    PATH=/Users/user/.kube/plugins:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
    PWD=/Users/user/Downloads
    LANG=en_US.UTF-8
    HOME=/Users/user
    LESS=-R
    LOGNAME=user
    SECURITYSESSIONID=user
    _=/usr/bin/env
    
    Client Version: 4.10.6
    

    The above shows a simple plugin demonstration.

    Reference

    1. Getting started with the OpenShift CLI
    2. Extending the OpenShift CLI with plug-ins
    3. Augmenting OpenShift CLI with Plugins https://cloud.redhat.com/blog/augmenting-openshift-cli-with-plugins
    4. tcpdump for OpenShift Workloads https://cloudcult.dev/tcpdump-for-openshift-workloads/
  • Learning Resources for Operators – First Two Weeks Notes

    To quote the Kubernetes website, “The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.” The following is a compendium to use while learning Operators.

    The de facto SDK to use is the Operator SDK, which provides Helm, Ansible, and Go scaffolding to support your implementation of the Operator pattern.

    The following are education classes on the Operator SDK.

    When running through the CO0201EN intermediate operators course, I hit a case where I had to create a ClusterRole and ClusterRoleBinding for the ServiceAccount; here is a snippet that might help others:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      namespace: memcached-operator-system
      name: service-reader-cr-mc
    rules:
    - apiGroups: ["cache.bastide.org"] # "" indicates the core API group
      resources: ["memcacheds"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      namespace: memcached-operator-system
      name: ext-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: service-reader-cr-mc
    subjects:
    - kind: ServiceAccount
      namespace: memcached-operator-system
      name: memcached-operator-controller-manager

    The reason for the above: I missed adding a kubebuilder declaration:

    //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
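    
    For the ClusterRole above, the marker I was missing would presumably look like:
    //+kubebuilder:rbac:groups=cache.bastide.org,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete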

    Thanks to https://stackoverflow.com/a/60334649/1873438

    The following are articles worth reviewing:

    The following are good Go resources:

    1. Go Code Comments – To write idiomatic Go, you should review the Code Review comments.
    2. Getting to Go: The Journey of Go’s Garbage Collector – The reference for Go and Garbage Collection in go
    3. An overview of memory management in Go – good overview of Go Memory Management
    4. Golang: Cost of using the heap – an allocation up to about 1M seems to stay on the stack; beyond that, it seems to land on the heap
    5. golangci-lint – The aggregated linters project is worthy of an installation and use. It’ll catch many issues and has a corresponding GitHub Action.
    6. Go in 3 Weeks – a comprehensive training for Go; a companion GitHub repo is available
    7. Defensive Coding Guide: The Go Programming Language

    The following are good OpenShift resources:

    1. Create OpenShift Plugins – You must have a CLI plug-in file that begins with oc- or kubectl-. You create a file and put it in /usr/local/bin/
    2. Details on running Code Ready Containers on Linux – The key hack I learned was to ssh -i ~/.crc/machines/crc/id_ecdsa core@<any host in the /etc/hosts>
      1. I ran on VirtualBox Ubuntu 20.04 with Guest Additions Installed
      2. Virtual Box Settings for the Machine – 6 CPU, 18G
        1. System > Processor > Enable PAE/NX and Enable Nested VT-X/AMD-V (which is a must for it to work)
        2. Network > Change Adapter Type to virtio-net and Set Promiscuous Mode to Allow VMS
      3. Install openssh-server so you can login remotely
      4. It will not install without a windowing system, so I have the default windowing environment installed.
    5. Note: I still get a failure on startup complaining about a timeout. I waited about 15 minutes after this, and the command oc get nodes --context admin --cluster crc --kubeconfig .crc/cache/crc_libvirt_4.10.3_amd64/kubeconfig now works.
    3. CRC virsh cheatsheet – If you are running Code Ready Containers and need to debug, you can use the virsh cheatsheet.
  • Digital Developer Conference – Hybrid Cloud: Integrating Healthcare Data in a Serverless World

    Recently, I developed and presented this lab, which gets released in late September 2021.

    In this lab, developers integrate a healthcare data application using IBM FHIR Server with Red Hat OpenShift Serverless to create and respond to a healthcare scenario.

    This lab is a companion to the session Integrating Healthcare Data in a Serverless World at Digital Developer Conference – Hybrid Cloud.

    The content for this lab can be found at https://ibm.biz/ibm-fhir-server-healthcare-serverless.

    Have fun! Enjoy… Ask Questions… I’m here to help.

  • Playing with buildah and ubi-micro: Part 1

    buildah is an intriguing open source tool to build Open Container Initiative (OCI) container images using a scripted approach rather than a traditional Dockerfile. It’s fascinating, and I’ve started to use podman and buildah to build my project’s images.

    I picked ubi-micro as my starting point. Per Red Hat, ubi-micro is the smallest possible image, excluding the package manager and all of its dependencies, which are normally included in a container image. This approach is an alternative to the current release of the IBM FHIR Server image. The following documents only my first stages with Java testing.

    1. On Fedora, install the prerequisites.
    # sudo dnf install buildah -y
    Last metadata expiration check: 0:23:36 ago on Thu 02 Sep 2021 10:06:55 AM EDT.
    Dependencies resolved.
    =====================================================================================================================================================================
     Package                               Architecture                         Version                                      Repository                             Size
    =====================================================================================================================================================================
    Installing:
     buildah                               x86_64                               1.21.4-5.fc33                                updates                               7.9 M
    
    Transaction Summary
    =====================================================================================================================================================================
    Install  1 Package
    
    Total download size: 7.9 M
    Installed size: 29 M
    Downloading Packages:
    buildah-1.21.4-5.fc33.x86_64.rpm                                                                                                     7.2 MB/s | 7.9 MB     00:01
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                                                6.2 MB/s | 7.9 MB     00:01
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                                                                             1/1
      Installing       : buildah-1.21.4-5.fc33.x86_64                                                                                                                1/1
      Running scriptlet: buildah-1.21.4-5.fc33.x86_64                                                                                                                1/1
      Verifying        : buildah-1.21.4-5.fc33.x86_64                                                                                                                1/1
    
    Installed:
      buildah-1.21.4-5.fc33.x86_64
    
    Complete!
    
    2. Start the new image
    # microcontainer=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
    Trying to pull registry.access.redhat.com/ubi8/ubi-micro:latest...
    Getting image source signatures
    Copying blob 4f4fb700ef54 done
    Copying blob 098a109c8679 done
    Copying config c5ba898d36 done
    Writing manifest to image destination
    Storing signatures
    
    3. Confirm the container name.
    # echo $microcontainer
    ubi-micro-working-container
    
    4. Mount the layer locally and display the path.
    # micromount=$(buildah mount $microcontainer)
    # echo $micromount
    /var/lib/containers/storage/overlay/14c524d6a5ef0e94887bc52685dbe911b40a5a9e39a6df00dc3b02e5f5ad7796/merged
    
    5. Set up the AdoptOpenJDK repository.
    cat <<'EOF' > $micromount/etc/yum.repos.d/adoptopenjdk.repo
    [AdoptOpenJDK]
    name=AdoptOpenJDK
    baseurl=http://adoptopenjdk.jfrog.io/adoptopenjdk/rpm/rhel/8/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
    EOF
    
    6. Install into the micromount without any ancillary dependencies.
    yum install \
        --installroot $micromount \
        --releasever 8 \
        --setopt install_weak_deps=false \
        --nodocs -y \
        adoptopenjdk-11-openj9xl.x86_64
    

    Results in:

    ------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                               8.9 MB/s | 193 MB     00:21
    warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
    warning: /var/lib/containers/storage/overlay/14c524d6a5ef0e94887bc52685dbe911b40a5a9e39a6df00dc3b02e5f5ad7796/merged/var/cache/dnf/AdoptOpenJDK-096a01411439d076/packages/adoptopenjdk-11-openj9xl-11.0.10+9.openj9-0.24.0-3.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 74885c03: NOKEY
    AdoptOpenJDK                                                                                         13 kB/s | 3.1 kB     00:00
    warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
    Importing GPG key 0x74885C03:
     Userid     : "AdoptOpenJDK (used for publishing RPM and DEB files) <adoptopenjdk@gmail.com>"
     Fingerprint: 8ED1 7AF5 D7E6 75EB 3EE3 BCE9 8AC3 B291 7488 5C03
     From       : https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
    
    7. Clean up the dependencies
    # yum clean all \
     --installroot $micromount
    warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
    61 files removed
    
    8. Unmount the container
    buildah umount $microcontainer
    
    9. Commit the image
    buildah commit $microcontainer ubi-micro-java
    
    10. Confirm the image
    # buildah images
    REPOSITORY                                  TAG        IMAGE ID       CREATED          SIZE
    localhost/ubi-micro-java                    latest     334404b8ebf2   22 seconds ago   43 MB
    

    It’s about 40M smaller than ubi-minimal, as it has no docs or ancillary dependencies.
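    
    To sanity-check the committed image, running the Java runtime should print its version (a quick, illustrative check assuming podman is installed):
    # podman run --rm localhost/ubi-micro-java java -version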

    Tip: Starting with the IBM FHIR Server

    To start with the IBM FHIR Server image, you can use:

    buildah from --pull docker.io/ibmcom/ibm-fhir-server:latest
    
    [root@localhost ~]# buildah from --pull docker.io/ibmcom/ibm-fhir-server:latest
    Trying to pull docker.io/ibmcom/ibm-fhir-server:latest...
    Getting image source signatures
    Copying blob e2bef77118c7 done
    Copying blob 45cc8b7f2b43 done
    Copying blob 5627e846e80f done
    Copying blob 5f6bf015319e done
    Copying blob 87212cfd39ea done
    Copying blob b89ea354ae59 done
    Copying blob 4a939b72e1c6 done
    Copying blob d3cbf41efb4e done
    Copying blob 4feff1abc28e done
    Copying blob 9ff4465d271b done
    Copying blob 5e41012b4001 done
    Copying blob 410af8b678f6 done
    Copying blob 2f26dc40d01f done
    Copying blob 1415c9c2e161 done
    Copying blob e374de62001e done
    Copying blob 94d978ce0b1f done
    Copying blob 1fabae8675b6 done
    Copying blob 7b088cbebf16 done
    Copying blob 4167c1ebbd85 done
    Copying config 637552c186 done
    Writing manifest to image destination
    Storing signatures
    ibm-fhir-server-working-container
    

    Tip: Pulling Fedora

    If you need to use Fedora, you can use fedora-minimal.

    # buildah from --pull registry.fedoraproject.org/fedora-minimal
    

    To remove the image

    $ podman image rm registry.fedoraproject.org/fedora-minimal:34
    

    Tip: Running with SELinux

    If you are running with SELinux, you should set specific SELinux booleans.

    1. Set the boolean
    $ setsebool -P container_manage_cgroup 1
    
    2. Confirm the boolean
    $ getsebool container_manage_cgroup
    container_manage_cgroup --> on
    
