Category: OpenShift

  • Setting up an IBM PowerVS Workspace with an IBM Cloud VPC

    As part of the Red Hat OpenShift Multi-Arch Compute effort, I’ve been working on Power and Intel Compute architecture pairs:

    1. Intel Control Plane with Power and Intel Compute
    2. Power Control Plane with Power and Intel Compute

    This article helps you set up an IBM Cloud VPC connected to an IBM Power Virtual Server workspace. You can follow this recipe:

    1. Install the ibmcloud CLI: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
    2. Install the Power IAAS, Transit Gateway, VPC Infrastructure, and Cloud Internet Services plugins: ibmcloud plugin install power-iaas tg-cli vpc-infrastructure cis
    3. Log in to the ibmcloud CLI: ibmcloud login --apikey API_KEY -r us-east
    4. List the datacenters with ibmcloud pi datacenters; in our case we want wdc06.
    5. List the resource group to find its ID:
    ❯ ibmcloud resource group dev-resource-group
    Retrieving resource group dev-resource-group under account 555555555555555 as email@id.xyz...
    OK
    
                              
    Name:                     dev-resource-group
    Account ID:               555555555555555
    ID:                       44444444444444444
    Default Resource Group:   false
    State:                    ACTIVE
    
    6. Create a Workspace in a Power Edge Router (PER) enabled PowerVS zone:
    ❯ ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    Creating workspace rdr-mac-p2-wdc06...
    
    Name       rdr-mac-p2-wdc06
    Plan ID    f165dd34-3a40-423b-9d95-e90a23f724dd
    
    7. Get the workspace ID (the second field in the response):
    ❯ ibmcloud pi workspaces 2>&1 | grep rdr-mac-p2-wdc06
    crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::     7777777-6666-5555-44444-1111111   rdr-mac-p2-wdc06
    
    8. Get the workspace and check that its status is active:
    ❯ ibmcloud pi workspace 7777777-6666-5555-44444-1111111 --json
    {
        "capabilities": {
            "cloud-connections": false,
            "power-edge-router": true,
            "power-vpn-connections": false,
            "transit-gateway-connection": false
        },
        "details": {
            "creationDate": "2024-01-24T20:52:59.178Z",
            "crn": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
            "powerEdgeRouter": {
                "state": "active",
                "type": "automated"
            }
        },
        "id": "7777777-6666-5555-44444-1111111",
        "location": {
            "region": "wdc06",
            "type": "data-center",
            "url": "https://us-east.power-iaas.cloud.ibm.com"
        },
        "name": "rdr-mac-p2-wdc06",
        "status": "active",
        "type": "off-premises"
    }
    
    9. Target the workspace:
    ❯ ibmcloud pi service-target crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::
    Targeting service crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::...
    
    10. Create a private Power network so there is an IP range for the Power workers:
    ❯ ibmcloud pi network-create-private ocp-net --dns-servers 9.9.9.9 --jumbo --cidr-block 192.168.200.0/24 --gateway 192.168.200.1 --ip-range 192.168.200.10-192.168.200.250
    Creating network ocp-net under account Power Cloud - pcloudci as user email@id.xyz...
    Network ocp-net created.
                 
    ID           3e1add7e-1a12-4a50-9325-87f957b0cd63
    Name         ocp-net
    Type         vlan
    VLAN         797
    CIDR Block   192.168.200.0/24
    IP Range     [192.168.200.10 192.168.200.250]
    Gateway      192.168.200.1
    DNS          9.9.9.9, 161.26.0.10, 161.26.0.11
    
    11. Import the CentOS Stream 8 stock image:
    ❯ ibmcloud pi image-create CentOS-Stream-8       
    Creating new image from CentOS-Stream-8 under account Power Cloud - pcloudci as user email@id.xyz...
    Image created from CentOS-Stream-8.
                       
    Image              4904b3db-1dde-4f3c-a696-92f068816f6f
    Name               CentOS-Stream-8
    Arch               ppc64
    Container Format   bare
    Disk Format        raw
    Hypervisor         phyp
    Type               stock
    OS                 rhel
    Size               120
    Created            2024-01-24T21:00:29.000Z
    Last Updated       2024-01-24T21:00:29.000Z
    Description        
    Storage Type       
    Storage Pool    
    
    12. Find the closest Transit Gateway location:
    ❯ ibmcloud tg locations
    Listing Transit Service locations under account Power Cloud - pcloudci as user email@id.xyz...
    OK
    Location   Location Type   Billing Location   
    eu-es      region          eu   
    eu-de      region          eu   
    au-syd     region          ap   
    eu-gb      region          eu   
    br-sao     region          br   
    jp-osa     region          ap   
    jp-tok     region          ap   
    ca-tor     region          ca   
    us-south   region          us   
    us-east    region          us   
    
    13. Create the Transit Gateway:
    # ibmcloud tg gateway-create --name rdr-mac-p2-wdc06-tg --location us-east --routing global \
        --resource-group-id 44444444444444444 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "pending"
    }
    
    14. Wait until the Transit Gateway status is available (a polling sketch follows the output):
    ❯ ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "available"
    }
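
    If you prefer not to poll by hand, a minimal wait-loop sketch (assumes jq is installed; the gateway ID is the one created above):

    until [ "$(ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json | jq -r '.status')" = "available" ]; do
      # poll every 30 seconds until the Transit Gateway reports "available"
      echo "waiting for the transit gateway..."
      sleep 30
    done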
    
    15. Create a VPC (the following steps add a subnet and a Public Gateway):
    ibmcloud is vpc-create rdr-mac-p2-wdc06-vpc --resource-group-id 44444444444444444 --output JSON
    {
        "classic_access": false,
        "created_at": "2024-01-24T21:12:46.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "cse_source_ips": [
            {
                "ip": {
                    "address": "10.12.98.66"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
                    "name": "us-east-1"
                }
            },
            {
                "ip": {
                    "address": "10.12.108.205"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-2",
                    "name": "us-east-2"
                }
            },
            {
                "ip": {
                    "address": "10.22.56.222"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-3",
                    "name": "us-east-3"
                }
            }
        ],
        "default_network_acl": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::network-acl:r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/network_acls/r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "id": "r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "name": "causation-browse-capture-behind"
        },
        "default_routing_table": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333/routing_tables/r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "id": "r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "name": "retaining-acquaint-retiring-curry",
            "resource_type": "routing_table"
        },
        "default_security_group": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::security-group:r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/security_groups/r001-ffa5c27a-5f18-5f18-b679-4444444333",
            "id": "r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "name": "jailer-lurch-treasure-glacial"
        },
        "dns": {
            "enable_hub": false,
            "resolution_binding_count": 0,
            "resolver": {
                "servers": [
                    {
                        "address": "161.26.0.10"
                    },
                    {
                        "address": "161.26.0.11"
                    }
                ],
                "type": "system",
                "configuration": "default"
            }
        },
        "health_reasons": null,
        "health_state": "inapplicable",
        "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
        "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
        "name": "rdr-mac-p2-wdc06-vpc",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "vpc",
        "status": "pending"
    }
    
    16. Check that the VPC status is available:
    ❯ ibmcloud is vpc rdr-mac-p2-wdc06-vpc --output json | jq -r '.status'
    available
    
    17. Add a subnet:
    ❯ ibmcloud is subnet-create sn01 rdr-mac-p2-wdc06-vpc \
            --resource-group-id 44444444444444444 \
            --ipv4-address-count 256 --zone us-east-1   
    Creating subnet sn01 in resource group 44444444444444444 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      -   
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group      
    
    18. Create a Public Gateway in the zone:
    ❯ ibmcloud is public-gateway-create gw01 rdr-mac-p2-wdc06-vpc us-east-1 \
            --resource-group-id 44444444444444444 \
            --output JSON
    {
        "created_at": "2024-01-24T21:21:18.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::public-gateway:r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "floating_ip": {
            "address": "150.239.80.219",
            "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::floating-ip:r001-022b865a-4674-4791-94f7-ee4fac646287",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/floating_ips/r001-022b865a-4674-4791-94f7-ee4fac646287",
            "id": "r001-022b865a-4674-4791-94f7-ee4fac646287",
            "name": "gw01"
        },
        "href": "https://us-east.iaas.cloud.ibm.com/v1/public_gateways/r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "id": "r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "name": "gw01",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "public_gateway",
        "status": "available",
        "vpc": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
            "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
            "name": "rdr-mac-p2-wdc06-vpc",
            "resource_type": "vpc"
        },
        "zone": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
            "name": "us-east-1"
        }
    }
    
    19. Attach the Public Gateway to the subnet:
    ❯ ibmcloud is subnet-update sn01 --vpc rdr-mac-p2-wdc06-vpc \
            --pgw gw01
    Updating subnet sn01 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      ID                                          Name      
                        r001-f5f27e42-aed6-4b1a-b121-f234e5149416   gw01      
                           
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group    
    
    20. Attach the PowerVS workspace (the PER network) to the Transit Gateway:
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name powervs-conn --network-id crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111:: --network-type power_virtual_server --output json
    
    {
        "created_at": "2024-01-25T00:37:37.364Z",
        "id": "75646025-3ea2-45e2-a5b3-36870a9de141",
        "name": "powervs-conn",
        "network_id": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
        "network_type": "power_virtual_server",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    21. Check that the connection status is attached:
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 75646025-3ea2-45e2-a5b3-36870a9de141 --output json | jq -r '.status'
    attached
    
    22. Attach the VPC to the Transit Gateway:
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name vpc-conn --network-id crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333 --network-type vpc --output json
    {
        "created_at": "2024-01-25T00:40:26.629Z",
        "id": "777777777-eef2-4a27-832d-6c80d2ac599f",
        "name": "vpc-conn",
        "network_id": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "network_type": "vpc",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    23. Check that the connection status is attached:
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 777777777-eef2-4a27-832d-6c80d2ac599f --output json | jq -r '.status'
    attached
    

    You now have a VPC and a Power Workspace connected over the Transit Gateway. The next step is to set up the Security Groups to enable communication between the subnets.
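
    As a preview of that next step, a hedged sketch of one inbound rule on the VPC's default security group so the PowerVS subnet can reach the VPC (the security group ID and CIDR come from the outputs above; tighten the port range to your needs):

    # allow inbound TCP from the PowerVS subnet on the VPC's default security group
    ibmcloud is security-group-rule-add r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b inbound tcp \
        --remote 192.168.200.0/24 --port-min 1 --port-max 65535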

    More details to come to help your adoption of Multi-Arch Compute.

  • cert-manager Operator for Red Hat OpenShift v1.13

    The IBM Power development team is happy to introduce the cert-manager Operator for Red Hat OpenShift on Power. cert-manager is a “cluster-wide service that provides application certificate lifecycle management”. This service manages certificates and integrates with external certificate authorities using the Automated Certificate Management Environment (ACME) protocol.

    For v1.13, the release notes also highlight expanded support for multiple architectures – AMD64, IBM Z® (s390x), IBM Power® (ppc64le), and ARM64.

    This is exciting and I’ll give you a flavor of how to use the cert-manager with your OpenShift cluster. I’ll demonstrate how to use Let’s Encrypt for the HTTP01 challenge type and IBM Cloud Internet Services paired with Let’s Encrypt for the DNS01 challenge type.

    This write-up uses a 4.13 cluster on IBM PowerVS deployed with ocp4-upi-powervs; the same steps apply to 4.14 and to on-premises environments. To facilitate the HTTP01 challenge type, the IBM Cloud Services section of the configuration is used:

    ### Using IBM Cloud Services
    use_ibm_cloud_services     = true
    ibm_cloud_vpc_name         = "rdr-cert-manager-vpc"
    ibm_cloud_vpc_subnet_name  = "sn-20231206-01"
    ibm_cloud_resource_group = "resource-group"
    iaas_vpc_region           = "au-syd"               # if empty, will default to ibmcloud_region.
    ibm_cloud_cis_crn         = "crn:v1:bluemix:public:internet-svcs:global:a/<account_id>:<cis_instance_id>::"
    ibm_cloud_tgw             = "rdr-sec-certman"  # Name of existing Transit Gateway where VPC and PowerVS targets are already added.
    

    This means you would have a CIS instance set up with a real domain linked. You would configure the IBM Cloud VPC to connect to the PowerVS workspace over a Transit Gateway. Ideally the connection uses the PER networking feature of PowerVS. This sets up a real hostname for the callback from Let’s Encrypt and configures the Load Balancers that support port 80/443 traffic.

    To set up the cert-manager Operator, log in to the Web Console as an administrator (a CLI alternative is sketched after these steps).

    1. Click on Operators > OperatorHub
    2. Filter on cert-manager
    3. Select cert-manager Operator for Red Hat OpenShift
    4. Click Install using the namespace provided
    5. Wait a few minutes for it to install.
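
    If you prefer the CLI, a hedged sketch of the equivalent install follows; the namespace, channel, and package name are assumptions, so confirm them first with oc get packagemanifests -n openshift-marketplace | grep cert-manager.

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cert-manager-operator
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: cert-manager-operator
      namespace: cert-manager-operator
    spec:
      targetNamespaces:
      - cert-manager-operator
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-cert-manager-operator
      namespace: cert-manager-operator
    spec:
      channel: stable-v1
      name: openshift-cert-manager-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF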

    You now have a working cert-manager Operator and are ready for the HTTP01 challenge type. For this, we switch to the command line.

    1. Log in via the command line as a cluster-admin.
    2. Set up the letsencrypt-http01 Issuer:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-http01
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
        - http01:
            ingress:
              class: openshift-default
    EOF
    

    Note, the above uses the production Let’s Encrypt endpoint; you could use staging instead. Be careful how many certificates you create and which endpoint you use, as rate limiting may be applied.
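
    For example, a staging Issuer is the same shape with only the ACME server URL changed to Let’s Encrypt’s published staging endpoint:

    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: letsencrypt-http01-staging
    spec:
      acme:
        # staging endpoint: certificates are untrusted, but rate limits are far more generous
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging-key
        solvers:
        - http01:
            ingress:
              class: openshift-default
    EOF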

    3. Create a certificate for my hosted cluster:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-http01
    spec:
      dnsNames:
      - testa.$(oc project --short).apps.cm-4a41.domain.name
      issuerRef:
        name: letsencrypt-http01
      secretName: cert-test-http01b-sec
    EOF
    
    4. Check the progress using oc:
    #  oc get certificate,certificaterequest,order
    NAME                                         READY SECRET                AGE
    certificate.cert-manager.io/cert-test-http01 True  cert-test-dns01-b-sec 48m
    
    NAME                                                APPROVED DENIED READY ISSUER                                 REQUESTOR                                         AGE
    certificaterequest.cert-manager.io/cert-test-http01 True            True  letsencrypt-prody                      system:serviceaccount:cert-manager:cert-manager   25m
    
    NAME                                                     STATE   AGE
    order.acme.cert-manager.io/cert-test-http01-3937192702   valid   25m
    

    Once the order switches from pending to valid, your certificate is available in the secret.

    5. Get the certificate using oc. You can also mount the secret or use it for a route (a sketch for inspecting the issued certificate follows):
    oc get secret cert-test-http01b-sec -oyaml
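
    To inspect what was issued, a hedged sketch (assumes openssl is available on your workstation):

    # decode the tls.crt entry from the secret and print the subject, issuer, and validity window
    oc get secret cert-test-http01b-sec -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates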
    

    If you don’t have direct access to the internet, or HTTP01 is not an option, you can use the cert-manager-webhook-ibmcis DNS01 solver.

    1. Clone the repository: git clone https://github.com/IBM/cert-manager-webhook-ibmcis.git
    2. Change to the directory: cd cert-manager-webhook-ibmcis
    3. Create the webhook project: oc new-project cert-manager-webhook-ibmcis
    4. Update the pod-security labels:
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/cert-manager-webhook-ibmcis pod-security.kubernetes.io/warn=privileged --overwrite=true
    
    5. Create the ibmcis deployment: oc apply -f cert-manager-webhook-ibmcis.yaml
    6. Once the pods in cert-manager-webhook-ibmcis are available and ready, proceed.
    7. Create the api-token secret. It is recommended that you use a Service ID with specific access to your CIS instance (a sketch of creating one follows the command):
    oc create secret generic ibmcis-credentials --from-literal=api-token="<YOUR API KEY>" 
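
    A hedged sketch of creating such a Service ID and API key with the ibmcloud CLI (the names are placeholders, and the policy is scoped to the CIS instance):

    # create a Service ID, grant it Manager on the CIS instance, then mint an API key for the webhook
    ibmcloud iam service-id-create cert-manager-cis --description "cert-manager DNS01 webhook"
    ibmcloud iam service-policy-create cert-manager-cis --roles Manager --service-name internet-svcs --service-instance <INSTANCE_ID>
    ibmcloud iam service-api-key-create cert-manager-cis-key cert-manager-cis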
    
    8. Retrieve your CIS CRN using the ibmcloud CLI, and save the ID:
    ❯ ibmcloud cis instances
    Retrieving service instances for service 'internet-svcs'
    OK
    Name                      ID                Location   State    Service Name
    mycis       crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::   global     active   internet-svcs
    
    9. Create the ClusterIssuer, updating YOUR_EMAIL and the CIS CRN:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prody
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
    
        # Email address used for ACME registration
        email: <YOUR_EMAIL>
    
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
    
        solvers:
        - dns01:
            webhook:
              groupName: acme.borup.work
              solverName: ibmcis
              config:
                apiKeySecretRef:
                  name: ibmcis-credentials
                  key: api-token
                cisCRN: 
                  - "crn:v1:bluemix:public:internet-svcs:global:a/<ACCOUNT_NUM>:<INSTANCE_ID>::"
    EOF
    
    10. Create the DNS01 Certificate:
    cat << EOF | oc apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-test-dns01-b
      namespace: cert-manager-webhook-ibmcis
    spec:
      commonName: "ts-a.cm-4a41.domain.name"
      dnsNames:
      - "ts-a.cm-4a41.domain.name"
      issuerRef:
        name: letsencrypt-prody
        kind: ClusterIssuer
      secretName: cert-test-dns01
    EOF
    
    11. Wait until your certificate shows READY=True (an oc wait sketch follows the output):
    # oc get certificate
    NAME                                      READY   SECRET                                    AGE
    cert-test-dns01-b                         True    cert-test-dns01-b-sec                     75m
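
    Rather than polling, a hedged sketch that uses oc wait to block until the certificate reports Ready:

    oc wait --for=condition=Ready certificate/cert-test-dns01-b -n cert-manager-webhook-ibmcis --timeout=30m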
    

    You’ve seen how to use both challenge types with Let’s Encrypt and IBM Cloud Internet Services, and you are ready to go.

    Best wishes,

    The Dev Team

  • Multi-Arch Compute Node Selector

    Originally posted as Multi-Arch Compute Node Selector: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/09/multi-arch-compute-node-selector?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    The OpenShift Container Platform Multi-Arch Compute feature supports the pair of processor (ISA) architectures – ppc64le and amd64 in a cluster. With these pairs, there are various permutations when scheduling Pods. Fortunately, the platform has controls on where the work is scheduled in the cluster. One of these controls is called the node selector. This article outlines how to go about using Node Selectors at different levels – Pod, Project/Namespace, Cluster.

    Pod Level

    Per OpenShift 4.14: Placing pods on specific nodes using node selectors, a node selector is a map of key/value pairs that determines where work is scheduled. A Pod’s nodeSelector values must match a Node’s labels for that Node to be eligible for scheduling. If you need more advanced boolean logic, you may use affinity and anti-affinity rules; see Kubernetes: Affinity and anti-affinity.
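
    For example, a minimal node affinity sketch (reusing the hypothetical test image from the examples below) that lets a Pod land on either architecture in the pair:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-either-arch
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
                      - amd64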

    Consider the Pod definition for test: its nodeSelector has x: y, which must match a label in a Node’s .metadata.labels for the Pod to be scheduled there.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        x: y
    

    You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. To direct a Pod to a Power node, you could use the kubernetes.io/arch: ppc64le label.

    apiVersion: v1
    kind: Pod
    metadata:
      name: test
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: ppc64le
    

    You can see where the Pod is scheduled using oc get pods -owide.

    ❯ oc get pods -owide
    NAME READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
    test 1/1     Running   0          24d   10.130.2.9    mac-acca-worker-1 <none>           <none>
    

    You can confirm the architecture of each node with oc get nodes mac-acca-worker-1 -owide. You’ll see the kernel version is marked with ppc64le.

    ❯ oc get nodes mac-acca-worker-1 -owide
    NAME                STATUS   ROLES    AGE   VERSION           INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
    mac-acca-worker-1   Ready    worker   25d   v1.28.3+20a5764   192.168.200.11   <none>        Red Hat Enterprise Linux CoreOS 414.92.....   5.14.0-284.41.1.el9_2.ppc64le   cri-o://1.28.2-2.rhaos4.14.gite7be4e1.el9
    

    This approach applies to high-level Kubernetes abstractions such as ReplicaSets, Deployments or DaemonSets.
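
    For example, a minimal sketch of a Deployment (same hypothetical image) whose Pod template pins all replicas to Power nodes:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: test
      template:
        metadata:
          labels:
            app: test
        spec:
          nodeSelector:
            kubernetes.io/arch: ppc64le
          containers:
            - name: test
              image: "ocp-power.xyz/test:v0.0.1"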

    Project / Namespace Level

    Per OpenShift 4.14: Creating project-wide node selectors, you may not control how Pods are created in a Project or Namespace, which would otherwise leave you without control over Pod placement. Kubernetes and OpenShift provide control over Pod placement when changing the Pod definition itself is not possible.

    Kubernetes enables this feature through the Namespace annotation scheduler.alpha.kubernetes.io/node-selector. You can read more about its internal behavior in the Kubernetes documentation.

    You can annotate the namespace:

    oc annotate ns example scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=ppc64le
    

    OpenShift enables this feature through the openshift.io/node-selector Namespace annotation.

    oc annotate ns example openshift.io/node-selector=kubernetes.io/arch=ppc64le
    

    These direct the Pod to the right node architecture.

    Cluster Level

    Per OpenShift 4.14: Creating default cluster-wide node selectors, the control of Pod creation may not be available, or there may be a need for a default. In that case, the customer controls Pod placement through the cluster-wide default node selector.

    To configure the cluster-wide default, patch the Scheduler Operator custom resource (CR).

    oc patch Scheduler cluster --type=merge --patch '{"spec": { "defaultNodeSelector": "kubernetes.io/arch=ppc64le" } }'
    

    To direct scheduling to the other architecture in the pair, you MUST define a nodeSelector on the Pod to override this default, as sketched below.
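
    A hedged sketch (hypothetical image) of a Pod that overrides a ppc64le cluster-wide default and targets an amd64 node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-amd64
    spec:
      containers:
        - name: test
          image: "ocp-power.xyz/test:v0.0.1"
      nodeSelector:
        kubernetes.io/arch: amd64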

    Summary

    You have seen how to control the distribution of work and how to schedule work with multiple architectures.

    In a future blog, I’ll cover the Multiarch Manager Operator, which aims to address problems and usability issues encountered when working with OpenShift clusters with multi-architecture compute nodes.

  • Multi Arch Compute OpenShift Container Platform (OCP) cluster on IBM Power 

    Following the release of Red Hat OpenShift 4.14, clients can run x86 and IBM Power Worker Nodes in the same OpenShift Container Platform cluster with Multi-Architecture Compute. A study compared the performance implications of deploying applications on a Multi Arch Compute OpenShift Container Platform (OCP) cluster with a cluster built exclusively on IBM Power architecture. The findings revealed no significant performance impact with or without Multi Arch Compute. Click here to learn more about the study and its results.

    Watch the Red Hat OpenShift Multi-Arch Introduction Video to learn how, why, and when to add Power to your x86 OpenShift cluster.   

    Watch the OpenShift Multi-Arch Sock Shop Demonstration Video deploying the open-source Sock Shop e-commerce solution using a mix of x86 and Power Worker Nodes with Red Hat OpenShift Multi-Arch to further your understanding. 

  • Awesome Notes – 11/28

    Here are some great resources for OpenShift Container Platform on Power:

    UKI Brunch & Learn – Red Hat OpenShift – Multi-Architecture Compute

    Glad to see the Multiarchitecture Compute with an Intel Control Plane and Power worker in all its glory. Thanks to Paul Chapman

    https://www.linkedin.com/posts/chapmanp_uki-brunch-learn-red-hat-openshift-activity-7133370146890375168-AmuL?utm_source=share&utm_medium=member_desktop

    Explore Multi Arch Compute in OpenShift cluster with IBM Power systems

    In the ever-evolving landscape of computing, the quest for optimal performance and adaptability remains constant. This study delves into the performance implications of deploying applications on a Multi Arch Compute OpenShift Container Platform (OCP) cluster, comparing it with a cluster exclusively built on IBM Power architecture. Our findings reveal that, with or without Multi Arch Compute, there is no significant impact on performance.

    Thanks to @Mel from the IBM Power Systems Performance Team

    https://community.ibm.com/community/user/powerdeveloper/blogs/mel-bakhshi/2023/11/28/explore-mac-ocp-on-power

    Enabling FIPS Compliance in Openshift Cluster Platform on Power

    A new PDEX blog is posted to help technical experts configure OpenShift Container Platform on Power, with the necessary background to configure FIPS 140-2 compliance.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/enabling-fips-compliance-in-openshift-cluster-plat?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting etcd data on OpenShift Container Platform on Power

    This article was originally posted to Medium by Gaurav Bankar and has been updated; it is now posted with updated details for 4.14.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-etcd-data-on-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Using TLS Security Profiles on OpenShift Container Platform on IBM Power

    This article identifies the cluster operators and components that use TLS Security Profiles, covers the available security profiles, and shows how to configure each profile and verify that it is properly enabled.

    https://community.ibm.com/community/user/powerdeveloper/communities/community-home/recent-community-blogs?communitykey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Encrypting disks on OpenShift Container Platform on Power Systems

    This document outlines the concepts, how to set up an external Tang cluster on IBM PowerVS, how to set up a cluster on IBM PowerVS, and how to confirm the encrypted disk setup.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/encrypting-disks-on-openshift-container-platform-o?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power

    This article outlines how to verify the profiles, check for the scan results, and configure a compliant cluster.

    https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/11/21/configuring-a-pci-dss-compliant-openshift-containe?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Open Source Container images for Power now available in IBM Container Registry

    The Open Source team has posted new images:

    Image                           Version   Pull command                                                                  Date
    grafana-mimir-build-image       2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-build-image-ppc64le:2.9.0        Nov 24, 2023
    grafana-mimir-continuous-test   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-continuous-test-ppc64le:2.9.0    Nov 24, 2023
    grafana-mimir                   2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-ppc64le:2.9.0                    Nov 24, 2023
    grafana-mimir-rules-action      2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimir-rules-action-ppc64le:2.9.0       Nov 24, 2023
    grafana-mimirtool               2.9.0     docker pull icr.io/ppc64le-oss/grafana-mimirtool-ppc64le:2.9.0                Nov 24, 2023
    grafana-query-tee               2.9.0     docker pull icr.io/ppc64le-oss/grafana-query-tee-ppc64le:2.9.0                Nov 24, 2023
    filebrowser                     v2.24.2   docker pull icr.io/ppc64le-oss/filebrowser-ppc64le:v2.24.2                    Nov 24, 2023
    neo4j                           5.9.0     docker pull icr.io/ppc64le-oss/neo4j-ppc64le:5.9.0                            Nov 24, 2023
    kong                            3.3.0     docker pull icr.io/ppc64le-oss/kong-ppc64le:3.3.0                             Nov 24, 2023
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Multi-arch build pipelines for Power: Automating multi-arch image builds

    Multi-arch build pipelines can greatly reduce the complexity of supporting multiple operating systems and architectures. Notably, images built on the Power architecture can seamlessly be supported by other architectures, and vice versa, amplifying the versatility and impact of your applications. Furthermore, automating the processes using various CI tools, not only accelerates the creation of multi-arch images but also ensures consistency, reliability, and ease of integration into diverse software environments.

    Building on our exploration of multi-arch pipelines for IBM Power in the first blog, this blog delves into the next frontier: automation. Automating multi-arch image builds using Continuous Integration (CI) tools has become essential in modern software development. This process allows developers to efficiently create and maintain container images that can run on various CPU architectures, such as IBM Power (ppc64le), x86 (amd64), or ARM, ensuring compatibility across diverse hardware environments. A small Buildx sketch follows the links below.

    Part 1: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/multi-arch-pipelines-for-ibm-power
    Part 2: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/11/27/automating-multi-arch-image-builds-for-power
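
    As a flavor of what such a pipeline automates, a hedged sketch of a multi-arch build with Docker Buildx (the registry and image name are placeholders):

    # create a builder that can target multiple platforms (QEMU emulation is used where needed)
    docker buildx create --name multiarch --use
    # build one manifest list covering amd64 and ppc64le, and push it to the registry
    docker buildx build --platform linux/amd64,linux/ppc64le -t registry.example.com/demo/app:latest --push .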

  • Quay.io now available on IBM Power Systems

    Thanks to the Red Hat and IBM Power teams, and Yussuf in particular – IBM Power now has Red Hat Quay install and run support.

    Red Hat Quay is a distributed, highly available, security-focused, and scalable private image registry platform that enables you to build, organize, distribute, and deploy containers for your enterprise. It provides a single and resilient content repository for delivering containerized software to development and production across Red Hat OpenShift and Kubernetes clusters.

    Now, Red Hat Quay is available on IBM Power with version 3.10. Read the official Red Hat Quay 3.10 blog and for more information visit the Red Hat Quay Documentation page.

    https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/11/07/quay-on-power
  • Useful Notes for September and October 2023

    Hi everyone, I’ve been heads down working on Multiarchitecture Compute and the Power platform for IBM.

    How to add /etc/hosts file entries in OpenShift containers

    You can add host aliases into the Pod definition, which is handy if the code is hard-coded with a DNS entry.

      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "home"
      - ip: "10.1.x.x"
        hostnames:
        - "remote-host"
    https://access.redhat.com/solutions/3696301

    Infrastructure Nodes in OpenShift 4

    A link to Infra nodes which provide a specific role in the cluster.

    https://access.redhat.com/solutions/5034771

    Multiarchitecture Compute Research

    Calling all IBM Power customers looking to impact Power modernization capabilities. The IBM Power Design Team is facilitating a study to understand customer sentiment toward Multi-Architecture Computing (MAC) and needs your help.

    https://community.ibm.com/community/user/powerdeveloper/blogs/erica-albert/2023/10/11/multi-architecture-computing-research-recruit 

    This is an interesting opportunity to work with customers on IBM Power and OpenShift as they mix the architecture workloads to meet their needs.

  • Weekly Notes

    Here are my weekly notes:

    Flow Connector

    If you are using the VPC, you can track connections between your subnets and your VPC using Flow Connector.

    ❯ find . -name "*.gz" -exec gunzip {} \;

    ❯ grep -Rh 192.168.200.10 | jq -r '.flow_logs[] | select(.action == "rejected") | "\(.initiator_ip),\(.target_ip),\(.target_port)"' | sort -u | grep 192.168.200.10
    10.245.0.5,192.168.200.10,36416,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,36430,2023-08-08T14:31:32Z
    10.245.0.5,192.168.200.10,58894,2023-08-08T14:31:32Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:41Z
    10.245.1.5,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.1.5,192.168.200.10,9100,2023-08-08T14:31:32Z
    10.245.129.4,192.168.200.10,43524,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:32Z
    10.245.64.4,192.168.200.10,10250,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9100,2023-08-08T14:31:42Z
    10.245.64.4,192.168.200.10,9537,2023-08-08T14:50:36Z

    Image Pruner Reports an Error

    You can check the image-registry status on the cluster operator.

    ❯ oc get co image-registry
    image-registry                             4.14.0-ec.4   True        False         True       3d14h   ImagePrunerDegraded: Job has reached the specified backoff limit
    

    The cronjob probably failed, so we can check that it exists.

    ❯ oc get cronjob -n openshift-image-registry
    NAME           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
    image-pruner   0 0 * * *   False     0        16h             3d15h
    

    We can run a one-off to clear the status above.

    ❯ oc create job --from=cronjob/image-pruner one-off-image-pruner -n openshift-image-registry
    job.batch/one-off-image-pruner created
    

    Then your image-registry should be a-ok.

    Ref: https://gist.github.com/ryderdamen/73ff9f93cd61d5dd45a0c50032e3ae03

  • Protected: Webinar: Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers

    This content is password protected.

  • Krew plugin on ppc64le

    Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Hey everyone,

    Krew, the kubectl plugin package manager, is now available on Power. The release v0.4.4 has a ppc64le download, so you can download it and start taking advantage of the krew plugin list. It also works with OpenShift.

    The Krew website has a list of plugins (https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross-compiled, such as view-utilization.

    To take advantage of Krew with OpenShift, here are a few steps:

    1. Download the krew-linux release:
    # curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
    
    2. Extract the krew plugin:
    tar xvf krew-linux_ppc64le.tar.gz 
    ./LICENSE
    ./krew-linux_ppc64le
    
    3. Move it to /usr/bin so it’s picked up by oc:
    mv krew-linux_ppc64le /usr/bin/kubectl-krew
    
    4. Update the krew plugin index:
    # kubectl krew update
    WARNING: To be able to run kubectl plugins, you need to add
    the following to your ~/.bash_profile or ~/.bashrc:
    
        export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
    
    and restart your shell.
    
    Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
    Updated the local copy of plugin index.
    
    5. Update your shell profile:
    # echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
    
    6. Restart your session (exit and return to the shell so the variables are loaded).
    7. Try oc krew list:
    # oc krew list
    PLUGIN  VERSION
    
    8. List all the plugins that support ppc64le:
    # oc krew search  | grep -v 'unavailable on linux/ppc64le'
    NAME                            DESCRIPTION                                         INSTALLED
    allctx                          Run commands on contexts in your kubeconfig         no
    assert                          Assert Kubernetes resources                         no
    bulk-action                     Do bulk actions on Kubernetes resources.            no
    ...
    tmux-exec                       An exec multiplexer using Tmux                      no
    view-utilization                Shows cluster cpu and memory utilization            no
    
    9. Install a plugin:
    # oc krew install view-utilization
    Updated the local copy of plugin index.
    Installing plugin: view-utilization
    Installed plugin: view-utilization
    \
     | Use this plugin:
     |      kubectl view-utilization
     | Documentation:
     |      https://github.com/etopeter/kubectl-view-utilization
     | Caveats:
     | \
     |  | This plugin needs the following programs:
     |  | * bash
     |  | * awk (gawk,mawk,awk)
     | /
    /
    WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
       These plugins are not audited for security by the Krew maintainers.
       Run them at your own risk.
    
    10. Use the plugin:
    # oc view-utilization
    Resource     Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
    CPU              7521         16        2400        5        45000        37479        37479
    Memory    33477885952         36  3774873600        4  92931489792  59453603840  59453603840
    

    Tip: There are many more plugins that support ppc64le but do not have the krew manifest updated; a sketch of installing from a custom manifest follows.
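
    A hedged sketch of installing one of those from a manifest you maintain yourself (foo.yaml is a placeholder Krew plugin manifest):

    # install directly from a local plugin manifest instead of the krew-index
    oc krew install --manifest=./foo.yaml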

    Thanks to PR 755 we have support for ppc64le.

    References

    https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md

    https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz