Blog

  • Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    FYI: Webinar: Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    Summary



    The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.

Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization in composite solutions that span multiple architectures.

Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. Afterwards, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

This presentation sets the background and gets you started so you can set up, configure, and schedule workloads. There will be a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

    Please join me on 11 April 2024, 9:00 AM ET. Please share any questions by clicking on the Reply button. If you have not done so already, register here and add it to your calendar.

  • February 2024 Updates

    Here are some updates for February 2024

    Open Source Container images for Power now available in IBM Container Registry

    The Power team has added a new image:

    envoy 1.29.0: podman pull icr.io/ppc64le-oss/envoy-ppc64le:1.29.0 (Feb 7, 2024)
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Kube-burner is a Kubernetes performance and scale test orchestration toolset that provides multi-faceted functionality. A new version, v1.9.2, has been released.

    https://github.com/kube-burner/kube-burner/tree/v1.9.2
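
    If you want to try it against a cluster you are already logged in to, here is a minimal sketch (my-workload.yml is a hypothetical configuration file name, not part of the release):

    # Run the workload described by a kube-burner configuration file (hypothetical name)
    kube-burner init -c my-workload.yml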

    Looking to learn more about Multi-Arch Compute on IBM Power? The following blog details how to set up an IBM PowerVS Workspace to an IBM Cloud Virtual Private Cloud: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/26/setting-up-an-ibm-powervs-workspace-to-a-ibm-cloud

    #IBM #IBMPower #Power10 #PowerVS #MultiArchCompute #PDeX

    Cert-manager is a cluster-wide service that provides application certificate lifecycle management. Learn how to use the cert-manager with your OpenShift cluster on IBM Power: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/18/cert-manager-operator-for-red-hat-openshift-v113 

    #IBM #Power10 #IBMPower #RedHat #OpenShift #clusters #clustermanagement #PDeX

    FYI: How to visualize your OpenSCAP compliance reports. Discover SCAPinoculars, a tool that helps you visualize OpenSCAP reports, and the advantages it brings when used with the OpenShift Compliance Operator.

    https://developers.redhat.com/articles/2024/02/08/how-visualize-your-openscap-compliance-reports

    My colleague Yussuf cut a new release, v6.0.0, of ocp4-upi-powervs. Please be sure to pull the latest code and use it when appropriate.

    FYI: My colleague @Punith Kenchappa posted an article on configuring your Multi-Arch Compute Pods with NodeAffinity; see Controlling Pod placement based on weighted node-affinity with your Multi-Arch Compute cluster. It’s super helpful for scheduling workloads across architecture types.

  • kube ns delete stuck in terminating

    Per https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state, you can delete the Namespace stuck in the Terminating Phase.

    Recipe

    1. Grab the namespace json

    oc get namespace ma-operator -o json > tmp.json

    2. Edit tmp.json to remove the finalizer (see the jq sketch after step 4)

    3. Start the proxy

    oc proxy &

    4. Delete the namespace

    curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/ma-operator/finalize
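
    For step 2, a minimal sketch of clearing the finalizers with jq instead of editing by hand (jq and the tmp_cleaned.json file name are assumptions; adjust the file name passed to curl accordingly):

    # Remove all finalizers from the captured namespace definition
    jq '.spec.finalizers = []' tmp.json > tmp_cleaned.json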
  • January 2024 – Lessons Learned

    This month I learned lots of things and wanted to share them as snippets that you might find useful.

    Create a virtual server instance in IBM Power Virtual Server using Red Hat Ansible Automation Platform

    The Power Developer Exchange article dives into using the Red Hat Ansible Automation Platform and how to create PowerVS instances with Ansible. The collection is available at https://github.com/IBM-Cloud/ansible-collection-ibm

    Per the blog, you learn how to start a sample controller UI and run a sample program, such as the hello_world.yaml playbook, to say hello to Ansible. With Ansible the options are infinite, and there is always something more to explore. We would like to know how you are using this solution, so drop us a comment.

    IBM Power Developer Exchange

    kube-burner is now a CNCF project

    kube-burner is a Kubernetes performance and scale test orchestration framework written in golang

    kube-burner

    Clock Drift Fix for Podman

    To update the default Podman-Machine:

    podman machine ssh --username root -- sed -i 's/^makestep\ .*$/makestep\ 1\ -1/' /etc/chrony.conf
    podman machine ssh --username root -- systemctl restart chronyd

    https://github.com/containers/podman/issues/11541#issuecomment-1416695974
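
    To verify the fix took effect, you can check chrony from inside the machine (a quick sketch; the tracking output format varies by chrony version):

    podman machine ssh --username root -- chronyc tracking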

    Advanced Cluster Management across Networks

    The cluster wasn’t loading, so I checked the following, and it pointed to an issue with a callback to a cluster inside my firewall setup. The klusterlet shows that it’s an issue with the callback.

    oc get pod -n open-cluster-management-agent


    ❯ oc get klusterlet klusterlet -oyaml
    Failed to create &SelfSubjectAccessReview{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:SelfSubjectAccessReviewSpec{ResourceAttributes:&ResourceAttributes{Namespace:,Verb:create,Group:cluster.open-cluster-management.io,Version:,Resource:managedclusters,Subresource:,Name:,},NonResourceAttributes:nil,},Status:SubjectAccessReviewStatus{Allowed:false,Reason:,EvaluationError:,Denied:false,},} with bootstrap secret “open-cluster-management-agent” “bootstrap-hub-kubeconfig”: Post “https://api.<XYZ>.com:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews”: dial tcp: lookup api.acmfunc.cp.fyre.ibm.com on 172.30.0.10:53: no such host

    Fun way to look at design

  • Setting up an IBM PowerVS Workspace to an IBM Cloud VPC

    As part of the Red Hat OpenShift Multi-Arch Compute effort, I’ve been working on Power and Intel Compute architecture pairs:

    1. Intel Control Plane with Power and Intel Compute
    2. Power Control Plane with Power and Intel Compute

    This article helps you set up an IBM Cloud VPC with IBM Power Virtual Server. You can follow this recipe:

    1. Install the ibmcloud CLI: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
    2. Install the Power IAAS, Transit Gateway, Cloud Internet Services, and VPC Infrastructure plugins: ibmcloud plugin install power-iaas tg-cli vpc-infrastructure cis
    3. Log in to the ibmcloud CLI: ibmcloud login --apikey API_KEY -r us-east
    4. List the datacenters: ibmcloud pi datacenters (in our case we want wdc06)
    5. List the resource group ID:
    ❯ ibmcloud resource group dev-resource-group
    Retrieving resource group dev-resource-group under account 555555555555555 as email@id.xyz...
    OK
    
                              
    Name:                     dev-resource-group
    Account ID:               555555555555555
    ID:                       44444444444444444
    Default Resource Group:   false
    State:                    ACTIVE
    
    6. Create a workspace in a Power Edge Router (PER) enabled PowerVS zone:
    ❯ ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    Creating workspace rdr-mac-p2-wdc06...
    
    Name       rdr-mac-p2-wdc06
    Plan ID    f165dd34-3a40-423b-9d95-e90a23f724dd
    
    7. Get the workspace ID (the second field in the response):
    ❯ ibmcloud pi workspaces 2>&1 | grep rdr-mac-p2-wdc06
    crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::     7777777-6666-5555-44444-1111111   rdr-mac-p2-wdc06
    
    8. Get the workspace and check that its status is active:
    ❯ ibmcloud pi workspace 7777777-6666-5555-44444-1111111 --json
    {
        "capabilities": {
            "cloud-connections": false,
            "power-edge-router": true,
            "power-vpn-connections": false,
            "transit-gateway-connection": false
        },
        "details": {
            "creationDate": "2024-01-24T20:52:59.178Z",
            "crn": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
            "powerEdgeRouter": {
                "state": "active",
                "type": "automated"
            }
        },
        "id": "7777777-6666-5555-44444-1111111",
        "location": {
            "region": "wdc06",
            "type": "data-center",
            "url": "https://us-east.power-iaas.cloud.ibm.com"
        },
        "name": "rdr-mac-p2-wdc06",
        "status": "active",
        "type": "off-premises"
    }
    
    9. Target the workspace:
    ❯ ibmcloud pi service-target crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::
    Targeting service crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::...
    
    10. Create a Power private network so there is an IP range for the Power workers:
    ❯ ibmcloud pi network-create-private ocp-net --dns-servers 9.9.9.9 --jumbo --cidr-block 192.168.200.0/24 --gateway 192.168.200.1 --ip-range 192.168.200.10-192.168.200.250
    Creating network ocp-net under account Power Cloud - pcloudci as user email@id.xyz...
    Network ocp-net created.
                 
    ID           3e1add7e-1a12-4a50-9325-87f957b0cd63
    Name         ocp-net
    Type         vlan
    VLAN         797
    CIDR Block   192.168.200.0/24
    IP Range     [192.168.200.10 192.168.200.250]
    Gateway      192.168.200.1
    DNS          9.9.9.9, 161.26.0.10, 161.26.0.11
    
    11. Import the CentOS Stream 8 stock image:
    ❯ ibmcloud pi image-create CentOS-Stream-8       
    Creating new image from CentOS-Stream-8 under account Power Cloud - pcloudci as user email@id.xyz...
    Image created from CentOS-Stream-8.
                       
    Image              4904b3db-1dde-4f3c-a696-92f068816f6f
    Name               CentOS-Stream-8
    Arch               ppc64
    Container Format   bare
    Disk Format        raw
    Hypervisor         phyp
    Type               stock
    OS                 rhel
    Size               120
    Created            2024-01-24T21:00:29.000Z
    Last Updated       2024-01-24T21:00:29.000Z
    Description        
    Storage Type       
    Storage Pool    
    
    12. Find the closest Transit Gateway location:
    ❯ ibmcloud tg locations
    Listing Transit Service locations under account Power Cloud - pcloudci as user email@id.xyz...
    OK
    Location   Location Type   Billing Location   
    eu-es      region          eu   
    eu-de      region          eu   
    au-syd     region          ap   
    eu-gb      region          eu   
    br-sao     region          br   
    jp-osa     region          ap   
    jp-tok     region          ap   
    ca-tor     region          ca   
    us-south   region          us   
    us-east    region          us   
    
    13. Create the Transit Gateway:
    # ibmcloud tg gateway-create --name rdr-mac-p2-wdc06-tg --location us-east --routing global \
        --resource-group-id 44444444444444444 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "pending"
    }
    
    14. Wait until the Transit Gateway is available (a polling sketch follows the output below):
    ❯ ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "available"
    }
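
    As mentioned in step 14, a minimal polling sketch (the gateway ID comes from the earlier output; the 30-second interval is an arbitrary choice):

    GW_ID=3333333-22222-1111-0000-dad4b38f5063
    until [ "$(ibmcloud tg gw $GW_ID --output json | jq -r '.status')" = "available" ]; do
      echo "waiting for transit gateway ${GW_ID} to become available..."
      sleep 30
    done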
    
    15. Create a VPC; it needs at least one subnet with a public gateway, which you add in the next steps:
    ibmcloud is vpc-create rdr-mac-p2-wdc06-vpc --resource-group-id 44444444444444444 --output JSON
    {
        "classic_access": false,
        "created_at": "2024-01-24T21:12:46.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "cse_source_ips": [
            {
                "ip": {
                    "address": "10.12.98.66"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
                    "name": "us-east-1"
                }
            },
            {
                "ip": {
                    "address": "10.12.108.205"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-2",
                    "name": "us-east-2"
                }
            },
            {
                "ip": {
                    "address": "10.22.56.222"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-3",
                    "name": "us-east-3"
                }
            }
        ],
        "default_network_acl": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::network-acl:r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/network_acls/r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "id": "r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "name": "causation-browse-capture-behind"
        },
        "default_routing_table": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333/routing_tables/r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "id": "r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "name": "retaining-acquaint-retiring-curry",
            "resource_type": "routing_table"
        },
        "default_security_group": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::security-group:r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/security_groups/r001-ffa5c27a-5f18-5f18-b679-4444444333",
            "id": "r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "name": "jailer-lurch-treasure-glacial"
        },
        "dns": {
            "enable_hub": false,
            "resolution_binding_count": 0,
            "resolver": {
                "servers": [
                    {
                        "address": "161.26.0.10"
                    },
                    {
                        "address": "161.26.0.11"
                    }
                ],
                "type": "system",
                "configuration": "default"
            }
        },
        "health_reasons": null,
        "health_state": "inapplicable",
        "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
        "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
        "name": "rdr-mac-p2-wdc06-vpc",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "vpc",
        "status": "pending"
    }
    
    16. Check that the VPC status is available:
    ❯ ibmcloud is vpc rdr-mac-p2-wdc06-vpc --output json | jq -r '.status'
    available
    
    17. Add a subnet:
    ❯ ibmcloud is subnet-create sn01 rdr-mac-p2-wdc06-vpc \
            --resource-group-id 44444444444444444 \
            --ipv4-address-count 256 --zone us-east-1   
    Creating subnet sn01 in resource group 44444444444444444 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      -   
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group      
    
    18. Create a public gateway for the zone:
    ❯ ibmcloud is public-gateway-create gw01 rdr-mac-p2-wdc06-vpc us-east-1 \
            --resource-group-id 44444444444444444 \
            --output JSON
    {
        "created_at": "2024-01-24T21:21:18.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::public-gateway:r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "floating_ip": {
            "address": "150.239.80.219",
            "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::floating-ip:r001-022b865a-4674-4791-94f7-ee4fac646287",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/floating_ips/r001-022b865a-4674-4791-94f7-ee4fac646287",
            "id": "r001-022b865a-4674-4791-94f7-ee4fac646287",
            "name": "gw01"
        },
        "href": "https://us-east.iaas.cloud.ibm.com/v1/public_gateways/r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "id": "r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "name": "gw01",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "public_gateway",
        "status": "available",
        "vpc": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
            "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
            "name": "rdr-mac-p2-wdc06-vpc",
            "resource_type": "vpc"
        },
        "zone": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
            "name": "us-east-1"
        }
    }
    
    19. Attach the public gateway to the subnet:
    ❯ ibmcloud is subnet-update sn01 --vpc rdr-mac-p2-wdc06-vpc \
            --pgw gw01
    Updating subnet sn01 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      ID                                          Name      
                        r001-f5f27e42-aed6-4b1a-b121-f234e5149416   gw01      
                           
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group    
    
    20. Attach the PowerVS (PER) network to the Transit Gateway:
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name powervs-conn --network-id crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111:: --network-type power_virtual_server --output json
    
    {
        "created_at": "2024-01-25T00:37:37.364Z",
        "id": "75646025-3ea2-45e2-a5b3-36870a9de141",
        "name": "powervs-conn",
        "network_id": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
        "network_type": "power_virtual_server",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    21. Check the connection; you should see the status attached:
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 75646025-3ea2-45e2-a5b3-36870a9de141 --output json | jq -r '.status'
    attached
    
    22. Attach the VPC to the Transit Gateway:
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name vpc-conn --network-id crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333 --network-type vpc --output json
    {
        "created_at": "2024-01-25T00:40:26.629Z",
        "id": "777777777-eef2-4a27-832d-6c80d2ac599f",
        "name": "vpc-conn",
        "network_id": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "network_type": "vpc",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    23. Check the connection status; it should be attached:
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 777777777-eef2-4a27-832d-6c80d2ac599f --output json | jq -r '.status'
    attached
    

    You now have a VPC and a Power Workspace connected. The next step is to set up the security groups to enable communication between the subnets.
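
    For example, a hypothetical sketch of a rule allowing traffic from the Power network CIDR into the VPC's default security group (the security group name comes from the VPC output above; adjust the rule to your needs):

    # Allow inbound traffic from the PowerVS subnet (192.168.200.0/24) into the default security group
    ibmcloud is security-group-rule-add jailer-lurch-treasure-glacial inbound all --remote 192.168.200.0/24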

    More details are coming to help your adoption of Multi-Arch Compute.

  • cert-manager Operator for Red Hat OpenShift v1.13

    The IBM Power development team is happy to introduce the cert-manager Operator for Red Hat OpenShift on Power. cert-manager is a “cluster-wide service that provides application certificate lifecycle management”. This service manages certificates and integrates with external certificate authorities using the Automated Certificate Management Environment (ACME) protocol.

    For v1.13, the release notes also describe expanded support for multiple architectures: AMD64, IBM Z® (s390x), IBM Power® (ppc64le), and ARM64.

    This is exciting and I’ll give you a flavor of how to use the cert-manager with your OpenShift cluster. I’ll demonstrate how to use Let’s Encrypt for the HTTP01 challenge type and IBM Cloud Internet Services paired with Let’s Encrypt for the DNS01 challenge type.

    This write-up uses a 4.13 cluster on IBM PowerVS deployed with ocp4-upi-powervs; the same steps apply to 4.14 and to on-premises environments. To facilitate the HTTP01 challenge type, the IBM Cloud Services section is used:

    ### Using IBM Cloud Services
    use_ibm_cloud_services     = true
    ibm_cloud_vpc_name         = "rdr-cert-manager-vpc"
    ibm_cloud_vpc_subnet_name  = "sn-20231206-01"
    ibm_cloud_resource_group = "resource-group"
    iaas_vpc_region           = "au-syd"               # if empty, will default to ibmcloud_region.
    ibm_cloud_cis_crn         = "crn:v1:bluemix:public:internet-svcs:global:a/<account_id>:<cis_instance_id>::"
    ibm_cloud_tgw             = "rdr-sec-certman"  # Name of existing Transit Gateway where VPC and PowerVS targets are already added.
    

    This means you would have a CIS instance set up with a real domain linked. You would configure the IBM Cloud VPC to connect to the PowerVS workspace over a Transit Gateway; ideally the connection uses the PER networking feature of PowerVS. This sets up a real hostname for the callback from Let's Encrypt and configures the load balancers, which support port 80/443 traffic.

    To set up cert-manager, log in to the web console as an administrator.

    1. Click on Operators > OperatorHub
    2. Filter on cert-manager
    3. Select cert-manager for Red Hat OpenShift
    4. Click Install using the namespace provided
    5. Wait a few minutes for it to install.

    You now have a working cert-manager Operator and are ready for the HTTP01 challenge type. For this, we switch to the command line.

    1. Log in via the command line as a cluster-admin.
    2. Set up the letsencrypt-http01 Issuer:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-http01
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
        - http01:
            ingress:
              class: openshift-default
    EOF
    

    Note: the above uses the production Let's Encrypt endpoint; you could use staging instead, as shown in the sketch below. Be careful how many certificates you create and which service you use, as rate limiting may apply.
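
    For example, a staging variant of the Issuer only changes the ACME server URL (the letsencrypt-staging-http01 name is an assumption):

    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-staging-http01
    spec:
      acme:
        # Let's Encrypt staging endpoint: higher rate limits, but certificates are not publicly trusted
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
        - http01:
            ingress:
              class: openshift-default
    EOF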

    3. Let’s create a certificate for my hosted cluster:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-http01
    spec:
      dnsNames:
      - testa.$(oc project --short).apps.cm-4a41.domain.name
      issuerRef:
        name: letsencrypt-http01
      secretName: cert-test-http01b-sec
    EOF
    
    4. We can check the progress using oc:
    #  oc get certificate,certificaterequest,order
    NAME                                         READY SECRET                AGE
    certificate.cert-manager.io/cert-test-http01 True  cert-test-dns01-b-sec 48m
    
    NAME                                                APPROVED DENIED READY ISSUER                                 REQUESTOR                                         AGE
    certificaterequest.cert-manager.io/cert-test-http01 True            True  letsencrypt-prody                      system:serviceaccount:cert-manager:cert-manager   25m
    
    NAME                                                     STATE   AGE
    order.acme.cert-manager.io/cert-test-http01-3937192702   valid   25m
    

    Once the order switches from pending to valid, your certificate is available in the secret.

    5. Get the certificate using oc. You can also mount the secret or use it for a route (a sketch follows the command below):
    oc get secret cert-test-http01b-sec -oyaml
    
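    As referenced in step 5, a minimal sketch of decoding the certificate and key out of the secret (the output file names are arbitrary):

    # Decode the TLS certificate and key stored by cert-manager
    oc get secret cert-test-http01b-sec -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
    oc get secret cert-test-http01b-sec -o jsonpath='{.data.tls\.key}' | base64 -d > tls.key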

    If you don’t have direct access to the internet, or HTTP01 is not an option, you can use cert-manager-webhook-ibmcis for the DNS01 challenge type.

    1. Clone the repository: git clone https://github.com/IBM/cert-manager-webhook-ibmcis.git
    2. Change to the directory: cd cert-manager-webhook-ibmcis
    3. Create the webhook project: oc new-project cert-manager-webhook-ibmcis
    4. Update the pod-security labels:
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/warn=privileged --overwrite=true
    
    5. Create the ibmcis deployment: oc apply -f cert-manager-webhook-ibmcis.yaml
    6. Wait until the pods in cert-manager-webhook-ibmcis are available and ready before proceeding (a wait sketch follows below).
    7. Create the api-token secret. It is recommended that you use a service ID with specific access to your CIS instance.
    oc create secret generic ibmcis-credentials --from-literal=api-token="<YOUR API KEY>" 
    
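    As noted in step 6, a minimal sketch of waiting for the webhook pods to become ready (the five-minute timeout is an arbitrary choice):

    oc wait pods --all --for=condition=Ready -n cert-manager-webhook-ibmcis --timeout=300s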
    8. Retrieve your CRN using the ibmcloud CLI, and save the ID:
    ❯ ibmcloud cis instances
    Retrieving service instances for service 'internet-svcs'
    OK
    Name                      ID                Location   State    Service Name
    mycis       crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::   global     active   internet-svcs
    
    9. Create the ClusterIssuer, updating YOUR_EMAIL and the CIS CRN:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prody
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
    
        # Email address used for ACME registration
        email: <YOUR_EMAIL>
    
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
    
        solvers:
        - dns01:
            webhook:
              groupName: acme.borup.work
              solverName: ibmcis
              config:
                apiKeySecretRef:
                  name: ibmcis-credentials
                  key: api-token
                cisCRN: 
                  - "crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::"
    EOF
    
    10. Create the DNS01 Certificate:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-dns01-b
      namespace: cert-manager-webhook-ibmcis
    spec:
      commonName: "ts-a.cm-4a41.domain.name"
      dnsNames:
      - "ts-a.cm-4a41.domain.name"
      issuerRef:
        name: letsencrypt-prody
        kind: ClusterIssuer
      secretName: cert-test-dns01
    EOF
    
    11. Wait until your certificate shows READY=True:
    # oc get certificate
    NAME                                      READY   SECRET                                    AGE
    cert-test-dns01-b                         True    cert-test-dns01-b-sec                     75m
    

    You’ve seen how to use both challenge types with Let's Encrypt and IBM Cloud Internet Services, and you are ready to go.

    Best wishes,

    The Dev Team

  • Multi-Arch Compute Node Selector

    Originally posted to Node Selector https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/09/multi-arch-compute-node-selector?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    The OpenShift Container Platform Multi-Arch Compute feature supports a pair of processor (ISA) architectures, ppc64le and amd64, in a single cluster. With these pairs, there are various permutations when scheduling Pods. Fortunately, the platform has controls over where work is scheduled in the cluster. One of these controls is called the node selector. This article outlines how to use node selectors at different levels: Pod, Project/Namespace, and Cluster.

    Pod Level

    Per OpenShift 4.14: Placing pods on specific nodes using node selectors, a node selector is a map of key/value pairs that determines where work is scheduled. The Pod nodeSelector values must match the labels of a Node for the Pod to be eligible for scheduling there. If you need more advanced boolean logic, you may use affinity and anti-affinity rules. See Kubernetes: Affinity and anti-affinity.

    Consider the Pod definition for test: the nodeSelector has x: y and is matched against a Node whose .metadata.labels contain the same key/value pair.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        x: y
    

    You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. To direct a Pod to a Power node, you could use the kubernetes.io/arch: ppc64le label.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: ppc64le
    

    You can see where the Pod is scheduled using oc get pods -owide.

    ❯ oc get pods -owide
    NAME READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
    test 1/1     Running   0          24d   10.130.2.9    mac-acca-worker-1 <none>           <none>
    

    You can confirm the architecture for each node with oc get nodes mac-acca-worker-1 -owide. You’ll see the kernel version is marked with ppc64le.

    ❯ oc get nodes mac-acca-worker-1 -owide
    NAME                STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
    mac-acca-worker-1   Ready    worker   25d   v1.28.3+20a5764   192.168.200.11   <none>        Red Hat Enterprise Linux CoreOS 414.92.....   5.14.0-284.41.1.el9_2.ppc64le   cri-o://1.28.2-2.rhaos4.14.gite7be4e1.el9
    

    This approach applies to high-level Kubernetes abstractions such as ReplicaSets, Deployments or DaemonSets.
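
    For example, a minimal sketch of a Deployment pinned to Power nodes (the name, labels, and replica count are hypothetical, reusing the test image from above):

    cat << EOF | oc apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-deploy
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: test
      template:
        metadata:
          labels:
            app: test
        spec:
          # Every replica is scheduled onto a ppc64le node
          nodeSelector:
            kubernetes.io/arch: ppc64le
          containers:
          - name: test
            image: "ocp-power.xyz/test:v0.0.1"
    EOF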

    Project / Namespace Level

    Per OpenShift 4.14: Creating project-wide node selectors, you may not control how Pods are created within a Project or Namespace, which would leave you without control over Pod placement. Kubernetes and OpenShift provide namespace-level controls over Pod placement when editing the Pod definition is not possible.

    Kubernetes enables this feature through the Namespace annotation scheduler.alpha.kubernetes.io/node-selector. You can read more about internal-behavior.

    You can annotate the namespace:

    oc annotate ns example scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=ppc64le
    

    OpenShift enables this feature through the openshift.io/node-selector Namespace annotation:

    oc annotate ns example openshift.io/node-selector=kubernetes.io/arch=ppc64le
    

    These direct the Pod to the right node architecture.

    Cluster Level

    Per OpenShift 4.14: Creating default cluster-wide node selectors, control of Pod creation may not be available, or a default may be needed. In this case, you control Pod placement through the cluster-wide default node selector.

    To configure the cluster-wide default, patch the Scheduler Operator custom resource (CR).

    oc patch Scheduler cluster --type=merge --patch '{"spec": { "defaultNodeSelector": "kubernetes.io/arch=ppc64le" } }'
    
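    To confirm the default was applied, you can read it back (a quick sketch):

    oc get scheduler cluster -o jsonpath='{.spec.defaultNodeSelector}{"\n"}'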

    To direct scheduling to the other architecture in the pair, you MUST define a nodeSelector on the workload to override the default.

    Summary

    You have seen how to control the distribution of work and how to schedule work with multiple architectures.

    In a future blog, I’ll cover the Multiarch Manager Operator source, which aims to address problems and usability issues encountered when working with OpenShift clusters with multi-architecture compute nodes.

  • Multi Arch Compute OpenShift Container Platform (OCP) cluster on IBM Power 

    Following the release of Red Hat OpenShift 4.14, clients can run x86 and IBM Power worker nodes in the same OpenShift Container Platform cluster with Multi-Architecture Compute. A study compared the performance implications of deploying applications on a Multi-Arch Compute OpenShift Container Platform (OCP) cluster with those of a cluster built exclusively on IBM Power architecture. The findings revealed no significant performance impact with or without Multi-Arch Compute. Click here to learn more about the study and the results.

    Watch the Red Hat OpenShift Multi-Arch Introduction Video to learn how, why, and when to add Power to your x86 OpenShift cluster.   

    Watch the OpenShift Multi-Arch Sock Shop Demonstration Video deploying the open-source Sock Shop e-commerce solution using a mix of x86 and Power Worker Nodes with Red Hat OpenShift Multi-Arch to further your understanding. 

  • Awesome Notes – 11/28

    Here are some great resources for OpenShift Container Platform on Power:

    UKI Brunch & Learn – Red Hat OpenShift – Multi-Architecture Compute

    Glad to see the Multiarchitecture Compute with an Intel Control Plane and Power worker in all its glory. Thanks to Paul Chapman

    https://www.linkedin.com/posts/chapmanp_uki-brunch-learn-red-hat-openshift-activity-7133370146890375168-AmuL?utm_source=share&utm_medium=member_desktop

    Explore Multi Arch Compute in OpenShift cluster with IBM Power systems

    In the ever-evolving landscape of computing, the quest for optimal performance and adaptability remains constant. This study delves into the performance implications of deploying applications on a Multi Arch Compute OpenShift Container Platform (OCP) cluster, comparing it with a cluster exclusively built on IBM Power architecture. Our findings reveal that, with or without Multi Arch Compute, there is no significant impact on performance.

    Thanks to @Mel from the IBM Power Systems Performance Team

    https://community.ibm.com/community/user/powerdeveloper/blogs/mel-bakhshi/2023/11/28/explore-mac-ocp-on-power

    Enabling FIPS Compliance in Openshift Cluster Platform on Power

    A new PDeX blog post helps technical experts configure OpenShift Container Platform on Power and provides the necessary background to configure FIPS 140-2 compliance.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/enabling-fips-compliance-in-openshift-cluster-plat?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting etcd data on OpenShift Container Platform on Power

    This article was originally posted to Medium by Gaurav Bankar and is now posted with updated details for 4.14.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-etcd-data-on-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Using TLS Security Profiles on OpenShift Container Platform on IBM Power

    This article identifies which cluster operators and components use TLS security profiles, covers the available profiles, and shows how to configure each profile and verify that it is properly enabled.

    https://community.ibm.com/community/user/powerdeveloper/communities/community-home/recent-community-blogs?communitykey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting disks on OpenShift Container Platform on Power Systems

    This document outlines the concepts, how to set up an external Tang cluster on IBM PowerVS, how to set up a cluster on IBM PowerVS, and how to confirm the encrypted disk setup.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-disks-on-openshift-container-platform-o?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power

    This article outlines how to verify the profiles, check for the scan results, and configure a compliant cluster.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/configuring-a-pci-dss-compliant-openshift-containe?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Open Source Container images for Power now available in IBM Container Registry

    The OpenSource team has posted new images:

    grafana-mimir-build-image 2.9.0: docker pull icr.io/ppc64le-oss/grafana-mimir-build-image-ppc64le:2.9.0 (Nov 24, 2023)
    grafana-mimir-continuous-test 2.9.0: docker pull icr.io/ppc64le-oss/grafana-mimir-continuous-test-ppc64le:2.9.0 (Nov 24, 2023)
    grafana-mimir 2.9.0: docker pull icr.io/ppc64le-oss/grafana-mimir-ppc64le:2.9.0 (Nov 24, 2023)
    grafana-mimir-rules-action 2.9.0: docker pull icr.io/ppc64le-oss/grafana-mimir-rules-action-ppc64le:2.9.0 (Nov 24, 2023)
    grafana-mimirtool 2.9.0: docker pull icr.io/ppc64le-oss/grafana-mimirtool-ppc64le:2.9.0 (Nov 24, 2023)
    grafana-query-tee 2.9.0: docker pull icr.io/ppc64le-oss/grafana-query-tee-ppc64le:2.9.0 (Nov 24, 2023)
    filebrowser v2.24.2: docker pull icr.io/ppc64le-oss/filebrowser-ppc64le:v2.24.2 (Nov 24, 2023)
    neo4j 5.9.0: docker pull icr.io/ppc64le-oss/neo4j-ppc64le:5.9.0 (Nov 24, 2023)
    kong 3.3.0: docker pull icr.io/ppc64le-oss/kong-ppc64le:3.3.0 (Nov 24, 2023)
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Multi-arch build pipelines for Power: Automating multi-arch image builds

    Multi-arch build pipelines can greatly reduce the complexity of supporting multiple operating systems and architectures. Notably, images built on the Power architecture can seamlessly be supported by other architectures, and vice versa, amplifying the versatility and impact of your applications. Furthermore, automating the processes using various CI tools, not only accelerates the creation of multi-arch images but also ensures consistency, reliability, and ease of integration into diverse software environments.

    Building on our exploration of multi-arch pipelines for IBM Power in the first blog, this blog delves into the next frontier: automation. Automating multi-arch image builds using Continuous Integration (CI) tools has become essential in modern software development. This process allows developers to efficiently create and maintain container images that can run on various CPU architectures, such as IBM Power (ppc64le), x86 (amd64), or ARM, ensuring compatibility across diverse hardware environments.
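
    As a concrete illustration, a minimal sketch of building a manifest-listed image for both architectures with Podman on a host with qemu emulation configured (the image name and registry are hypothetical):

    # Build amd64 and ppc64le variants and collect them under one manifest list
    podman build --platform linux/amd64,linux/ppc64le --manifest quay.io/example/myapp:latest .
    # Push the manifest list and all architecture-specific images
    podman manifest push --all quay.io/example/myapp:latest docker://quay.io/example/myapp:latest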

    Part 1: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/multi-arch-pipelines-for-ibm-power
    Part 2: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/automating-multi-arch-image-builds-for-power

  • Quay.io now available on IBM Power Systems

    Thanks to the Red Hat and Power teams, and Yussuf in particular, IBM Power now has Quay install-and-run support.

    Red Hat Quay is a distributed, highly available, security-focused, and scalable private image registry platform that enables you to build, organize, distribute, and deploy containers for your enterprise. It provides a single and resilient content repository for delivering containerized software to development and production across Red Hat OpenShift and Kubernetes clusters.

    Now, Red Hat Quay is available on IBM Power with version 3.10. Read the official Red Hat Quay 3.10 blog and for more information visit the Red Hat Quay Documentation page.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/11/07/quay-on-power