Tag: openshift

  • kube ns delete stuck in terminating

    Per https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state, you can delete a Namespace that is stuck in the Terminating phase.

    Recipe

    1. Grab the namespace json

    oc get namespace ma-operator -o json > tmp.json

    2. Edit tmp.json to remove the finalizer
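
    If you prefer not to hand-edit the file, a quick way to clear the finalizers is with jq (a sketch, assuming jq is installed):

    # Empty the finalizers list in the saved namespace JSON
    jq '.spec.finalizers = []' tmp.json > tmp-patched.json && mv tmp-patched.json tmp.json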

    3. Start the proxy

    oc proxy &

    4. Delete the namespace

    curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/ma-operator/finalize
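
    After the PUT returns, you can confirm the namespace is gone (a quick check, not part of the original recipe):

    oc get namespace ma-operator
    # Expected: Error from server (NotFound): namespaces "ma-operator" not found
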
  • January 2023 – Lessons Learned

    Over the month, I learned lots of things and wanted to share them as snippets that you might find useful.

    Create a virtual server instance in IBM Power Virtual Server using Red Hat Ansible Automation Platform

    The Power Developer Exchange article dives into using the Red Hat Ansible Automation Platform and how to create PowerVS instances with Ansible. The collection is available at https://github.com/IBM-Cloud/ansible-collection-ibm

    Per the blog, you learn to start a sample controller UI and run a sample program, such as the hello_world.yaml playbook, to say hello to Ansible. With Ansible the options are nearly infinite, and there is always something more to explore. We would like to know how you are using this solution, so drop us a comment.

    IBM Power Developer Exchange
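
    As a minimal sketch of the flow (assuming the ibm.cloudcollection collection from that repository is installed via Ansible Galaxy, and using the sample playbook name from the article):

    # Install the IBM Cloud Ansible collection
    ansible-galaxy collection install ibm.cloudcollection
    # Run the sample playbook referenced in the blog
    ansible-playbook hello_world.yaml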

    kube-burner is now a CNCF project

    kube-burner is a Kubernetes performance and scale test orchestration framework written in Go.

    kube-burner

    Clock Drift Fix for Podman

    To update the default Podman machine:

    podman machine ssh --username root -- sed -i 's/^makestep\ .*$/makestep\ 1\ -1/' /etc/chrony.conf
    podman machine ssh --username root -- systemctl restart chronyd
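
    To confirm the change took effect, you can compare clocks and check chrony from inside the machine (a quick sketch; chronyc ships with the chrony package in the machine image):

    # Compare host and machine clocks
    date -u && podman machine ssh -- date -u
    # Inspect chrony's view of the clock offset
    podman machine ssh --username root -- chronyc tracking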

    https://github.com/containers/podman/issues/11541#issuecomment-1416695974

    Advanced Cluster Management across Networks

    The cluster wasn’t loading, so I checked the following, and it pointed to a problem with a callback to a cluster inside my firewall setup. The klusterlet shows that it’s an issue with the callback.

    oc get pod -n open-cluster-management-agent


    ❯ oc get klusterlet klusterlet -oyaml
    Failed to create &SelfSubjectAccessReview{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:SelfSubjectAccessReviewSpec{ResourceAttributes:&ResourceAttributes{Namespace:,Verb:create,Group:cluster.open-cluster-management.io,Version:,Resource:managedclusters,Subresource:,Name:,},NonResourceAttributes:nil,},Status:SubjectAccessReviewStatus{Allowed:false,Reason:,EvaluationError:,Denied:false,},} with bootstrap secret "open-cluster-management-agent" "bootstrap-hub-kubeconfig": Post "https://api.<XYZ>.com:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup api.acmfunc.cp.fyre.ibm.com on 172.30.0.10:53: no such host
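
    One way to dig into a callback failure like this (a sketch, assuming the bootstrap secret stores its kubeconfig under the usual kubeconfig key) is to check which hub API endpoint the agent is trying to reach:

    # Show the hub API server URL embedded in the bootstrap kubeconfig
    oc get secret bootstrap-hub-kubeconfig -n open-cluster-management-agent \
      -o jsonpath='{.data.kubeconfig}' | base64 -d | grep server:

    If that hostname is not resolvable from the managed cluster (as in the "no such host" error above), fix DNS or the hub URL before the klusterlet can register.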

  • Setting up an IBM PowerVS Workspace with an IBM Cloud VPC

    As part of the Red Hat OpenShift Multi-Arch Compute effort, I’ve been working on Power and Intel Compute architecture pairs:

    1. Intel Control Plane with Power and Intel Compute
    2. Power Control Plane with Power and Intel Compute

    This article helps you set up an IBM Cloud VPC connected to an IBM Power Virtual Server workspace. You can follow this recipe:

    1. Install the ibmcloud CLI: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
    2. Install the Power IaaS, Transit Gateway, Cloud Internet Services, and VPC Infrastructure plugins: ibmcloud plugin install power-iaas tg-cli vpc-infrastructure cis
    3. Log in to the ibmcloud CLI: ibmcloud login --apikey API_KEY -r us-east
    4. List the datacenters: ibmcloud pi datacenters. In our case we want wdc06.
    5. List the resource group ID: ibmcloud resource group dev-resource-group
    ❯ ibmcloud resource group dev-resource-group
    Retrieving resource group dev-resource-group under account 555555555555555 as email@id.xyz...
    OK
    
                              
    Name:                     dev-resource-group
    Account ID:               555555555555555
    ID:                       44444444444444444
    Default Resource Group:   false
    State:                    ACTIVE
    
    6. Create a Workspace on a Power Edge Router enabled PowerVS zone: ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    ❯ ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    Creating workspace rdr-mac-p2-wdc06...
    
    Name       rdr-mac-p2-wdc06
    Plan ID    f165dd34-3a40-423b-9d95-e90a23f724dd
    
    7. Get the workspace ID (the second field in the response)
    ❯ ibmcloud pi workspaces 2>&1 | grep rdr-mac-p2-wdc06
    crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::     7777777-6666-5555-44444-1111111   rdr-mac-p2-wdc06
    
    8. Get the workspace and check that its status is active
    ❯ ibmcloud pi workspace 7777777-6666-5555-44444-1111111 --json
    {
        "capabilities": {
            "cloud-connections": false,
            "power-edge-router": true,
            "power-vpn-connections": false,
            "transit-gateway-connection": false
        },
        "details": {
            "creationDate": "2024-01-24T20:52:59.178Z",
            "crn": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
            "powerEdgeRouter": {
                "state": "active",
                "type": "automated"
            }
        },
        "id": "7777777-6666-5555-44444-1111111",
        "location": {
            "region": "wdc06",
            "type": "data-center",
            "url": "https://us-east.power-iaas.cloud.ibm.com"
        },
        "name": "rdr-mac-p2-wdc06",
        "status": "active",
        "type": "off-premises"
    }
    
    9. Target the workspace
    ❯ ibmcloud pi service-target crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::
    Targeting service crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::...
    
    10. Create a Power Network using the CRN so there is an IP Range for the Power workers.
    ❯ ibmcloud pi network-create-private ocp-net --dns-servers 9.9.9.9 --jumbo --cidr-block 192.168.200.0/24 --gateway 192.168.200.1 --ip-range 192.168.200.10-192.168.200.250
    Creating network ocp-net under account Power Cloud - pcloudci as user email@id.xyz...
    Network ocp-net created.
                 
    ID           3e1add7e-1a12-4a50-9325-87f957b0cd63
    Name         ocp-net
    Type         vlan
    VLAN         797
    CIDR Block   192.168.200.0/24
    IP Range     [192.168.200.10 192.168.200.250]
    Gateway      192.168.200.1
    DNS          9.9.9.9, 161.26.0.10, 161.26.0.11
    
    11. Import the CentOS Stream 8 stock image
    ❯ ibmcloud pi image-create CentOS-Stream-8       
    Creating new image from CentOS-Stream-8 under account Power Cloud - pcloudci as user email@id.xyz...
    Image created from CentOS-Stream-8.
                       
    Image              4904b3db-1dde-4f3c-a696-92f068816f6f
    Name               CentOS-Stream-8
    Arch               ppc64
    Container Format   bare
    Disk Format        raw
    Hypervisor         phyp
    Type               stock
    OS                 rhel
    Size               120
    Created            2024-01-24T21:00:29.000Z
    Last Updated       2024-01-24T21:00:29.000Z
    Description        
    Storage Type       
    Storage Pool    
    
    12. Find the closest location.
    ❯ ibmcloud tg locations
    Listing Transit Service locations under account Power Cloud - pcloudci as user email@id.xyz...
    OK
    Location   Location Type   Billing Location   
    eu-es      region          eu   
    eu-de      region          eu   
    au-syd     region          ap   
    eu-gb      region          eu   
    br-sao     region          br   
    jp-osa     region          ap   
    jp-tok     region          ap   
    ca-tor     region          ca   
    us-south   region          us   
    us-east    region          us   
    
    13. Create the Transit Gateway
    # ibmcloud tg gateway-create --name rdr-mac-p2-wdc06-tg --location us-east --routing global \
        --resource-group-id 44444444444444444 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "pending"
    }
    
    14. Wait until the Transit Gateway is available (see the polling sketch after the output below).
    ❯ ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "available"
    }
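
    If you are scripting this, a simple polling loop saves re-running the command by hand (a sketch using the same CLI and jq):

    # Poll the Transit Gateway until it reports "available"
    while [ "$(ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json | jq -r '.status')" != "available" ]; do
        echo "waiting for transit gateway..."
        sleep 30
    done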
    
    15. Create a VPC; at least one subnet with a Public Gateway is added in the following steps.
    ibmcloud is vpc-create rdr-mac-p2-wdc06-vpc --resource-group-id 44444444444444444 --output JSON
    {
        "classic_access": false,
        "created_at": "2024-01-24T21:12:46.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "cse_source_ips": [
            {
                "ip": {
                    "address": "10.12.98.66"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
                    "name": "us-east-1"
                }
            },
            {
                "ip": {
                    "address": "10.12.108.205"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-2",
                    "name": "us-east-2"
                }
            },
            {
                "ip": {
                    "address": "10.22.56.222"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-3",
                    "name": "us-east-3"
                }
            }
        ],
        "default_network_acl": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::network-acl:r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/network_acls/r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "id": "r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "name": "causation-browse-capture-behind"
        },
        "default_routing_table": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333/routing_tables/r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "id": "r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "name": "retaining-acquaint-retiring-curry",
            "resource_type": "routing_table"
        },
        "default_security_group": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::security-group:r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/security_groups/r001-ffa5c27a-5f18-5f18-b679-4444444333",
            "id": "r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "name": "jailer-lurch-treasure-glacial"
        },
        "dns": {
            "enable_hub": false,
            "resolution_binding_count": 0,
            "resolver": {
                "servers": [
                    {
                        "address": "161.26.0.10"
                    },
                    {
                        "address": "161.26.0.11"
                    }
                ],
                "type": "system",
                "configuration": "default"
            }
        },
        "health_reasons": null,
        "health_state": "inapplicable",
        "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
        "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
        "name": "rdr-mac-p2-wdc06-vpc",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "vpc",
        "status": "pending"
    }
    
    16. Check that the status is available
    ❯ ibmcloud is vpc rdr-mac-p2-wdc06-vpc --output json | jq -r '.status'
    available
    
    17. Add a subnet
    ❯ ibmcloud is subnet-create sn01 rdr-mac-p2-wdc06-vpc \
            --resource-group-id 44444444444444444 \
            --ipv4-address-count 256 --zone us-east-1   
    Creating subnet sn01 in resource group 44444444444444444 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      -   
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group      
    
    18. Create a public gateway in the zone
    ❯ ibmcloud is public-gateway-create gw01 rdr-mac-p2-wdc06-vpc us-east-1 \
            --resource-group-id 44444444444444444 \
            --output JSON
    {
        "created_at": "2024-01-24T21:21:18.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::public-gateway:r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "floating_ip": {
            "address": "150.239.80.219",
            "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::floating-ip:r001-022b865a-4674-4791-94f7-ee4fac646287",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/floating_ips/r001-022b865a-4674-4791-94f7-ee4fac646287",
            "id": "r001-022b865a-4674-4791-94f7-ee4fac646287",
            "name": "gw01"
        },
        "href": "https://us-east.iaas.cloud.ibm.com/v1/public_gateways/r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "id": "r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "name": "gw01",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "public_gateway",
        "status": "available",
        "vpc": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
            "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
            "name": "rdr-mac-p2-wdc06-vpc",
            "resource_type": "vpc"
        },
        "zone": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
            "name": "us-east-1"
        }
    }
    
    19. Attach the Public Gateway to the Subnet
    ❯ ibmcloud is subnet-update sn01 --vpc rdr-mac-p2-wdc06-vpc \
            --pgw gw01
    Updating subnet sn01 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      ID                                          Name      
                        r001-f5f27e42-aed6-4b1a-b121-f234e5149416   gw01      
                           
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group    
    
    20. Attach the PER network to the Transit Gateway
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name powervs-conn --network-id crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111:: --network-type power_virtual_server --output json
    
    {
        "created_at": "2024-01-25T00:37:37.364Z",
        "id": "75646025-3ea2-45e2-a5b3-36870a9de141",
        "name": "powervs-conn",
        "network_id": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
        "network_type": "power_virtual_server",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    21. Check that the connection status is attached
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 75646025-3ea2-45e2-a5b3-36870a9de141 --output json | jq -r '.status'
    attached
    
    22. Attach the VPC to the Transit Gateway
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name vpc-conn --network-id crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333 --network-type vpc --output json
    {
        "created_at": "2024-01-25T00:40:26.629Z",
        "id": "777777777-eef2-4a27-832d-6c80d2ac599f",
        "name": "vpc-conn",
        "network_id": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "network_type": "vpc",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    23. Check that the status is attached
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 777777777-eef2-4a27-832d-6c80d2ac599f --output json | jq -r '.status'
    attached
    

    You now have a VPC and a Power Workspace connected. The next step is to set up the Security Groups to enable communication between the subnets.
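
    For example, to let the PowerVS subnet created above reach workloads behind the VPC's default security group, you could add an inbound rule like the following (a sketch; the security group ID and CIDR come from the earlier output, and you may want tighter, port-specific rules):

    # Allow all inbound traffic from the Power network 192.168.200.0/24
    ibmcloud is security-group-rule-add r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b inbound all \
        --remote 192.168.200.0/24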

    More details are on the way to help your adoption of Multi-Arch Compute.

  • Multi-Arch Compute Node Selector

    Originally posted at https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/09/multi-arch-compute-node-selector?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    The OpenShift Container Platform Multi-Arch Compute feature supports a pair of processor (ISA) architectures, ppc64le and amd64, in the same cluster. With these pairs, there are various permutations when scheduling Pods. Fortunately, the platform has controls over where work is scheduled in the cluster. One of these controls is the node selector. This article outlines how to use node selectors at different levels: Pod, Project/Namespace, and Cluster.

    Pod Level

    Per OpenShift 4.14: Placing pods on specific nodes using node selectors, a node selector is a map of key/value pairs that determines where work is scheduled. The Pod's nodeSelector values must match a Node's labels for the Pod to be eligible for scheduling on that Node. If you need more advanced boolean logic, you may use affinity and anti-affinity rules. See Kubernetes: Affinity and anti-affinity.

    Consider the Pod definition for test below: its nodeSelector has x: y, which is matched against a Node's labels in .metadata.labels.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        x: y
    

    You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. To direct a Pod to a Power node, you could use the kubernetes.io/arch: ppc64le label.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: ppc64le
    

    You can see where the Pod is scheduled using oc get pods -owide.

    ❯ oc get pods -owide
    NAME READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
    test 1/1     Running   0          24d   10.130.2.9    mac-acca-worker-1 <none>           <none>
    

    You can confirm the architecture of the node with oc get nodes mac-acca-worker-1 -owide. You'll see the kernel version is marked with ppc64le.

    ❯ oc get nodes mac-acca-worker-1 -owide
    NAME                STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
    mac-acca-worker-1   Ready    worker   25d   v1.28.3+20a5764   192.168.200.11   <none>        Red Hat Enterprise Linux CoreOS 414.92.....   5.14.0-284.41.1.el9_2.ppc64le   cri-o://1.28.2-2.rhaos4.14.gite7be4e1.el9
    

    This approach applies to higher-level Kubernetes abstractions such as ReplicaSets, Deployments, or DaemonSets, which carry the node selector in their Pod template.
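
    For example, a Deployment carries the selector in its Pod template (a sketch; the name and image are the illustrative ones used above):

    oc apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deploy
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: test-deploy
      template:
        metadata:
          labels:
            app: test-deploy
        spec:
          containers:
          - name: test
            image: "ocp-power.xyz/test:v0.0.1"
          nodeSelector:
            kubernetes.io/arch: ppc64le
    EOF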

    Project / Namespace Level

    Per OpenShift 4.14: Creating project-wide node selectors, you may not have control over how Pods are created in a Project or Namespace, which would otherwise leave you without control over Pod placement. Kubernetes and OpenShift let you control Pod placement at the Project/Namespace level when editing the Pod definition is not possible.

    Kubernetes enables this feature through the Namespace annotation scheduler.alpha.kubernetes.io/node-selector. You can read more about the internal behavior in the Kubernetes PodNodeSelector admission plugin documentation.

    You can annotate the namespace:

    oc annotate ns example scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=ppc64le
    

    OpenShift enables this feature through its own Namespace annotation, openshift.io/node-selector.

    oc annotate ns example openshift.io/node-selector=kubernetes.io/arch=ppc64le
    

    These annotations direct Pods in the namespace to nodes with the matching architecture.
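
    You can confirm the annotation landed (a quick check, assuming the namespace is named example as above):

    oc get ns example -o jsonpath='{.metadata.annotations}'
    # Expect openshift.io/node-selector or scheduler.alpha.kubernetes.io/node-selector in the output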

    Cluster Level

    Per OpenShift 4.14: Creating default cluster-wide node selectors, you may not have control over Pod creation, or you may simply want a default. In that case, you control Pod placement through the cluster-wide default node selector.

    To configure the cluster-wide default, patch the Scheduler Operator custom resource (CR).

    oc patch Scheduler cluster --type=merge --patch '{"spec": { "defaultNodeSelector": "kubernetes.io/arch=ppc64le" } }'
    

    To direct scheduling to the other architecture in the pair, you MUST define a nodeSelector on the workload to override the default.
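
    For example, with the cluster default set to ppc64le, an amd64-only workload needs its own selector (a sketch reusing the illustrative test image from above):

    oc apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-amd64
    spec:
      containers:
      - name: test
        image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: amd64
    EOF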

    Summary

    You have seen how to control the distribution of work and how to schedule work with multiple architectures.

    In a future blog, I'll cover the Multiarch Manager Operator, which aims to address problems and usability issues encountered when working with OpenShift clusters that have multi-architecture compute nodes.

  • Multi Arch Compute OpenShift Container Platform (OCP) cluster on IBM Power 

    Following the release of Red Hat OpenShift 4.14, clients can run x86 and IBM Power Worker Nodes in the same OpenShift Container Platform Cluster with Multi-Architecture Compute. A study compared the performance of deploying applications on a Multi-Arch Compute OpenShift Container Platform (OCP) cluster against a cluster built exclusively on IBM Power architecture. Findings revealed no significant performance impact with or without Multi-Arch Compute. Click here to learn more about the study and its results.

    Watch the Red Hat OpenShift Multi-Arch Introduction Video to learn how, why, and when to add Power to your x86 OpenShift cluster.   

    Watch the OpenShift Multi-Arch Sock Shop Demonstration Video deploying the open-source Sock Shop e-commerce solution using a mix of x86 and Power Worker Nodes with Red Hat OpenShift Multi-Arch to further your understanding. 

  • Awesome Notes – 11/28

    Here are some great resources for OpenShift Container Platform on Power:

    UKI Brunch & Learn – Red Hat OpenShift – Multi-Architecture Compute

    Glad to see Multi-Architecture Compute with an Intel control plane and Power workers in all its glory. Thanks to Paul Chapman.

    https://www.linkedin.com/posts/chapmanp_uki-brunch-learn-red-hat-openshift-activity-7133370146890375168-AmuL?utm_source=share&utm_medium=member_desktop

    Explore Multi Arch Compute in OpenShift cluster with IBM Power systems

    In the ever-evolving landscape of computing, the quest for optimal performance and adaptability remains constant. This study delves into the performance implications of deploying applications on a Multi Arch Compute OpenShift Container Platform (OCP) cluster, comparing it with a cluster exclusively built on IBM Power architecture. Our findings reveal that, with or without Multi Arch Compute, there is no significant impact on performance.

    Thanks to @Mel from the IBM Power Systems Performance Team

    https://community.ibm.com/community/user/powerdeveloper/blogs/mel-bakhshi/2023/11/28/explore-mac-ocp-on-power

    Enabling FIPS Compliance in Openshift Cluster Platform on Power

    A new PDEX blog helps technical experts configure OpenShift Container Platform on Power and provides the necessary background to configure FIPS 140-2 compliance.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/enabling-fips-compliance-in-openshift-cluster-plat?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting etcd data on OpenShift Container Platform on Power

    This article was originally posted to Medium by Gaurav Bankar and is now reposted with updated details for OpenShift 4.14.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-etcd-data-on-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Using TLS Security Profiles on OpenShift Container Platform on IBM Power

    This article identifies the cluster operators and components that use TLS security profiles, covers the available profiles, and shows how to configure each profile and verify that it is properly enabled.

    https://community.ibm.com/community/user/powerdeveloper/communities/community-home/recent-community-blogs?communitykey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting disks on OpenShift Container Platform on Power Systems

    This document outlines the concepts, how to set up an external Tang cluster on IBM PowerVS, how to set up an OpenShift cluster on IBM PowerVS, and how to confirm the encrypted disk setup.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-disks-on-openshift-container-platform-o?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power

    This article outlines how to verify the profiles, check for the scan results, and configure a compliant cluster.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/configuring-a-pci-dss-compliant-openshift-containe?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Open Source Container images for Power now available in IBM Container Registry

    The OpenSource team has posted new images:

    Image                           Version   Pull command                                                                  Date
    grafana-mimir-build-image       2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-build-image-ppc64le:2.9.0        Nov 24, 2023
    grafana-mimir-continuous-test   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-continuous-test-ppc64le:2.9.0    Nov 24, 2023
    grafana-mimir                   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-ppc64le:2.9.0                    Nov 24, 2023
    grafana-mimir-rules-action      2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-rules-action-ppc64le:2.9.0       Nov 24, 2023
    grafana-mimirtool               2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimirtool-ppc64le:2.9.0                Nov 24, 2023
    grafana-query-tee               2.9.0     docker pull icr.io/ppc64le-oss/grafana-query-tee-ppc64le:2.9.0                Nov 24, 2023
    filebrowser                     v2.24.2   docker pull icr.io/ppc64le-oss/filebrowser-ppc64le:v2.24.2                    Nov 24, 2023
    neo4j                           5.9.0     docker pull icr.io/ppc64le-oss/neo4j-ppc64le:5.9.0                            Nov 24, 2023
    kong                            3.3.0     docker pull icr.io/ppc64le-oss/kong-ppc64le:3.3.0                             Nov 24, 2023
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Multi-arch build pipelines for Power: Automating multi-arch image builds

    Multi-arch build pipelines can greatly reduce the complexity of supporting multiple operating systems and architectures. Notably, images built on the Power architecture can seamlessly be supported by other architectures, and vice versa, amplifying the versatility and impact of your applications. Furthermore, automating the processes using various CI tools, not only accelerates the creation of multi-arch images but also ensures consistency, reliability, and ease of integration into diverse software environments.

    Building on our exploration of multi-arch pipelines for IBM Power in the first blog, this blog delves into the next frontier: automation. Automating multi-arch image builds using Continuous Integration (CI) tools has become essential in modern software development. This process allows developers to efficiently create and maintain container images that can run on various CPU architectures, such as IBM Power (ppc64le), x86 (amd64), or ARM, ensuring compatibility across diverse hardware environments.

    Part 1: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/multi-arch-pipelines-for-ibm-power
    Part 2: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/automating-multi-arch-image-builds-for-power
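
    As a generic illustration of the idea (a sketch, not taken from the linked blogs; the registry and image name are hypothetical), a multi-arch image for amd64 and ppc64le can be built and pushed in one step with docker buildx:

    # Build and push a manifest-list image covering both architectures
    docker buildx build --platform linux/amd64,linux/ppc64le \
        -t registry.example.com/myapp:latest --push .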

  • Quay.io now available on IBM Power Systems

    Thanks to the Red Hat and Power teams, and Yussuf in particular, IBM Power now has Red Hat Quay install and run support.

    Red Hat Quay is a distributed, highly available, security-focused, and scalable private image registry platform that enables you to build, organize, distribute, and deploy containers for your enterprise. It provides a single and resilient content repository for delivering containerized software to development and production across Red Hat OpenShift and Kubernetes clusters.

    Now, Red Hat Quay is available on IBM Power with version 3.10. Read the official Red Hat Quay 3.10 blog and for more information visit the Red Hat Quay Documentation page.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/11/07/quay-on-power
  • Useful Notes for September and October 2023

    Hi everyone, I’ve been heads down working on Multiarchitecture Compute and the Power platform for IBM.

    How to add /etc/hosts file entries in OpenShift containers

    You can add host aliases to the Pod definition, which is handy if the code is hard-coded with a DNS entry. The relevant fragment of the Pod spec looks like this (see a complete example after the link below):

      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "home"
      - ip: "10.1.x.x"
        hostnames:
        - "remote-host"
    https://access.redhat.com/solutions/3696301
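
    A complete, minimal Pod using hostAliases might look like this (a sketch; the name, image, IP, and hostname are illustrative):

    oc apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      hostAliases:
      - ip: "10.1.2.3"
        hostnames:
        - "remote-host"
      containers:
      - name: demo
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "3600"]
    EOF

    # The extra entries show up in the container's /etc/hosts
    oc exec hostaliases-demo -- cat /etc/hosts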

    Infrastructure Nodes in OpenShift 4

    A link about infra nodes, which serve a specific role in the cluster.

    https://access.redhat.com/solutions/5034771

    Multiarchitecture Compute Research

    Calling all IBM Power customers looking to impact Power modernization capabilities. The IBM Power Design Team is facilitating a study to understand customer sentiment toward Multi-Architecture Computing (MAC) and needs your help.

    https://community.ibm.com/community/user/powerdeveloper/blogs/erica-albert/2023/10/11/multi-architecture-computing-research-recruit 

    This is an interesting opportunity to work with customers on IBM Power and OpenShift as they mix the architecture workloads to meet their needs.

  • Protected: Webinar: Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers

    This content is password protected.

  • Krew plugin on ppc64le

    Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Hey everyone,

    Krew, the kubectl plugin package manager, is now available on Power. Release v0.4.4 has a ppc64le download. You can download it and start taking advantage of the Krew plugin list. It also works with OpenShift.

    The Krew website has a list of plugins (https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross compiled, such as view-utilization.

    To take advantage of Krew with OpenShift, here are a few steps:

    1. Download the krew-linux plugin
    # curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
    
    2. Extract the krew plugin
    tar xvf krew-linux_ppc64le.tar.gz 
    ./LICENSE
    ./krew-linux_ppc64le
    
    3. Move it to /usr/bin so it's picked up by oc.
    mv krew-linux_ppc64le /usr/bin/kubectl-krew
    
    4. Update the krew plugin index
    # kubectl krew update
    WARNING: To be able to run kubectl plugins, you need to add
    the following to your ~/.bash_profile or ~/.bashrc:
    
        export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
    
    and restart your shell.
    
    Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
    Updated the local copy of plugin index.
    
    5. Update your shell:
    # echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
    
    6. Restart your session (exit and come back to the shell so the variables are loaded)
    7. Try oc krew list
    # oc krew list
    PLUGIN  VERSION
    
    8. List all the plugins that support ppc64le.
    # oc krew search  | grep -v 'unavailable on linux/ppc64le'
    NAME                            DESCRIPTION                                         INSTALLED
    allctx                          Run commands on contexts in your kubeconfig         no
    assert                          Assert Kubernetes resources                         no
    bulk-action                     Do bulk actions on Kubernetes resources.            no
    ...
    tmux-exec                       An exec multiplexer using Tmux                      no
    view-utilization                Shows cluster cpu and memory utilization            no
    
    9. Install a plugin
    # oc krew install view-utilization
    Updated the local copy of plugin index.
    Installing plugin: view-utilization
    Installed plugin: view-utilization
    \
     | Use this plugin:
     |      kubectl view-utilization
     | Documentation:
     |      https://github.com/etopeter/kubectl-view-utilization
     | Caveats:
     | \
     |  | This plugin needs the following programs:
     |  | * bash
     |  | * awk (gawk,mawk,awk)
     | /
    /
    WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
       These plugins are not audited for security by the Krew maintainers.
       Run them at your own risk.
    
    10. Use the plugin.
    # oc view-utilization
    Resource     Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
    CPU              7521         16        2400        5        45000        37479        37479
    Memory    33477885952         36  3774873600        4  92931489792  59453603840  59453603840
    

    Tip: There are many more plugins that work on ppc64le but do not yet have their Krew manifest updated.
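
    If a plugin's upstream release already ships a ppc64le binary but the krew-index manifest does not list the platform yet, krew can install from a local manifest file (a sketch; my-plugin.yaml is a hypothetical manifest you prepare yourself):

    # Install a plugin from a locally prepared Krew manifest
    kubectl krew install --manifest=./my-plugin.yaml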

    Thanks to PR 755 we have support for ppc64le.

    References

    https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md

    https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz