More and more IBM® Power® clients are modernizing securely, with lower risk and faster time to value, by running cloud-native microservices on Red Hat® OpenShift® alongside their existing banking and industry applications on AIX, IBM i, and Linux. With the availability of Red Hat OpenShift 4.15 on March 19th, Red Hat and IBM introduced a long-awaited innovation called Multi-Architecture Compute, which enables clients to mix Power and x86 worker nodes in a single Red Hat OpenShift cluster. With this release, clients can now run the control plane for a Multi-Architecture Compute cluster natively on Power.
Some tips for setting up a Multi-Arch Compute Cluster
When setting up a multi-arch compute cluster manually, without automation, you’ll want to follow this process:
Set up the initial cluster with the multi payload on Intel or Power for the control plane (a quick verification check follows this list).
Open the network ports between the two environments.
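You can confirm the control plane was installed with the multi payload by inspecting the release metadata; the output should include "release.openshift.io/architecture": "multi".

❯ oc adm release info -o jsonpath='{.metadata.metadata}'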
Multi-Arch Compute for Red Hat OpenShift Container Platform on IBM Power systems lets one use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This feature opens new possibilities for versatility and optimization in composite solutions that span multiple architectures. The cluster owner can add additional workers after installation.
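After adding a worker of a second architecture, a quick way to see the mix is to list the nodes with their architecture label:

❯ oc get nodes -L kubernetes.io/arch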
With User Provisioned Infrastructure (UPI), the cluster owner may have set up the front-end load balancers with automation or manually. The IBM team provides automation for PowerVS (ocp4-upi-powervs), PowerVM (ocp4-upi-powervm), and PowerVM with HMC (ocp4-upi-powervm-hmc).
When installing a UPI cluster, the cluster is set up with an external load balancer, such as haproxy. The external load balancer routes traffic to pools for the ingress pods, the API server, and the MachineConfig server. The haproxy configuration is stored at /etc/haproxy/haproxy.cfg.
For instance, the configuration for the ingress-https load balancer would look like the following:
frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server master0 10.17.15.11:443 check
    server master1 10.17.19.70:443 check
    server master2 10.17.22.204:443 check
    server worker0 10.17.26.89:443 check
    server worker1 10.17.30.71:443 check
    server worker2 10.17.30.225:443 check
When adding a post-installation worker to a UPI cluster, you must update the ingress-http and ingress-https backends in /etc/haproxy/haproxy.cfg:
a. Find backend ingress-http, then add the worker hostnames and IPs before the first server entry.
server worker-amd64-0 10.17.15.11:80 check
server worker-amd64-1 10.17.19.70:80 check
b. Find backend ingress-https, then add the worker hostnames and IPs before the first server entry.
server worker-amd64-0 10.17.15.11:443 check
server worker-amd64-1 10.17.19.70:443 check
c. Save the config file.
d. Restart haproxy:
# systemctl restart haproxy
You now have the additional workers incorporated into haproxy, and as the ingress pods move from Power to Intel and back, you have a fully functional environment.
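To watch where the ingress (router) pods land as they move between architectures, check the openshift-ingress namespace:

❯ oc get pods -n openshift-ingress -o wide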
P.S. If you are running very advanced scenarios, you can change the IngressController spec.nodePlacement.nodeSelector to place the workload on specific architectures; see Configuring an Ingress Controller.
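As a minimal sketch of that scenario, the default IngressController can be pinned to one architecture using the standard kubernetes.io/arch node label (swap ppc64le for amd64 to target Intel workers):

❯ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
    -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"kubernetes.io/arch":"ppc64le"}}}}}'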
In Red Hat OpenShift 4.14, Multi-Architecture Compute was introduced for the IBM Power and IBM Z platforms, enabling a single heterogeneous cluster across different compute architectures. With the release of Red Hat OpenShift 4.15, clients can now add x86 compute nodes to a multi-architecture enabled cluster running on Power. This further simplifies deployment across different environments and provides a more consistent management experience. Clients are accelerating their modernization journeys with Multi-Architecture Compute and Red Hat OpenShift by exploiting the best-fit architecture for each solution and reducing the cost and complexity of workloads that require multiple compute architectures.
The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads. Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization in composite solutions that span multiple architectures. Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and gets you started setting up, configuring, and scheduling workloads. Afterward, Paul will give a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.
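If you want a head start on the scheduling topic, one common approach is to pin a workload to a single architecture with a nodeSelector; a minimal sketch, assuming a hypothetical deployment named my-app:

❯ oc patch deployment/my-app --type=merge \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'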
Please join me on 11 April 2024 at 9:00 AM ET. Please share any questions by clicking the Reply button. If you have not done so already, register here and add the event to your calendar.
Kube-burner is a Kubernetes performance and scale test orchestration toolset that provides multi-faceted functionality. A new version, v1.9.2, has been released.
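As a minimal usage sketch, assuming kube-burner is installed and a workload configuration file such as cluster-density.yml is at hand:

❯ kube-burner init -c cluster-density.yml --uuid $(uuidgen)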
FYI: How to visualize your OpenSCAP compliance reports. Discover SCAPinoculars, a tool that helps you visualize OpenSCAP reports, and the advantages it brings when used with the OpenShift Compliance Operator.
This Power Developer Exchange article dives into using the Red Hat Ansible Automation Platform and how to create PowerVS instances with Ansible. The collection is available at https://github.com/IBM-Cloud/ansible-collection-ibm.
Per the blog, you learn to start a sample controller UI and run a sample program, such as the hello_world.yaml playbook, to say hello to Ansible. With Ansible the options are infinite, and there is always something more to explore. We would like to know how you are using this solution, so drop us a comment.
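A minimal getting-started sketch, assuming the collection is published to Ansible Galaxy as ibm.cloudcollection and hello_world.yaml comes from the article:

❯ ansible-galaxy collection install ibm.cloudcollection
❯ ansible-playbook hello_world.yaml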
The cluster wasn’t getting loaded, so I checked the following, and it pointed to a problem with a callback to a cluster inside my firewall setup. The klusterlet status shows that it’s an issue with the callback.
oc get pod -n open-cluster-management-agent
❯ oc get klusterlet klusterlet -oyaml

Failed to create &SelfSubjectAccessReview{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:SelfSubjectAccessReviewSpec{ResourceAttributes:&ResourceAttributes{Namespace:,Verb:create,Group:cluster.open-cluster-management.io,Version:,Resource:managedclusters,Subresource:,Name:,},NonResourceAttributes:nil,},Status:SubjectAccessReviewStatus{Allowed:false,Reason:,EvaluationError:,Denied:false,},} with bootstrap secret "open-cluster-management-agent" "bootstrap-hub-kubeconfig": Post "https://api.<XYZ>.com:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup api.acmfunc.cp.fyre.ibm.com on 172.30.0.10:53: no such host
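To confirm which hub API server the agent calls back to, inspect the bootstrap kubeconfig named in the error; a sketch, assuming the secret stores its kubeconfig under the kubeconfig data key:

❯ oc get secret bootstrap-hub-kubeconfig -n open-cluster-management-agent \
    -o jsonpath='{.data.kubeconfig}' | base64 -d | grep server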
Install the ibmcloud CLI:
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
Install the Power IAAS, Transit Gateway, Cloud Internet Services, and Infrastructure Service plugins:
ibmcloud plugin install power-iaas tg-cli vpc-infrastructure cis
Log in to the ibmcloud CLI:
ibmcloud login --apikey API_KEY -r us-east
List the datacenters; in our case we want wdc06:
ibmcloud pi datacenters
List the resource group ID:
❯ ibmcloud resource group dev-resource-group
Retrieving resource group dev-resource-group under account 555555555555555 as email@id.xyz...
OK
Name: dev-resource-group
Account ID: 555555555555555
ID: 44444444444444444
Default Resource Group: false
State: ACTIVE
Create a workspace in a Power Edge Router enabled PowerVS zone:
❯ ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
Creating workspace rdr-mac-p2-wdc06...
Name rdr-mac-p2-wdc06
Plan ID f165dd34-3a40-423b-9d95-e90a23f724dd
Target the new workspace using its CRN:
❯ ibmcloud pi service-target crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::
Targeting service crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::...
Create a Power network using the CRN so there is an IP range for the Power workers:
❯ ibmcloud pi network-create-private ocp-net --dns-servers 9.9.9.9 --jumbo --cidr-block 192.168.200.0/24 --gateway 192.168.200.1 --ip-range 192.168.200.10-192.168.200.250
Creating network ocp-net under account Power Cloud - pcloudci as user email@id.xyz...
Network ocp-net created.
ID 3e1add7e-1a12-4a50-9325-87f957b0cd63
Name ocp-net
Type vlan
VLAN 797
CIDR Block 192.168.200.0/24
IP Range [192.168.200.10 192.168.200.250]
Gateway 192.168.200.1
DNS 9.9.9.9, 161.26.0.10, 161.26.0.11
Import the CentOS Stream 8 stock image:
❯ ibmcloud pi image-create CentOS-Stream-8
Creating new image from CentOS-Stream-8 under account Power Cloud - pcloudci as user email@id.xyz...
Image created from CentOS-Stream-8.
Image 4904b3db-1dde-4f3c-a696-92f068816f6f
Name CentOS-Stream-8
Arch ppc64
Container Format bare
Disk Format raw
Hypervisor phyp
Type stock
OS rhel
Size 120
Created 2024-01-24T21:00:29.000Z
Last Updated 2024-01-24T21:00:29.000Z
Description
Storage Type
Storage Pool
Find the closest Transit Gateway location:
❯ ibmcloud tg locations
Listing Transit Service locations under account Power Cloud - pcloudci as user email@id.xyz...
OK
Location Location Type Billing Location
eu-es region eu
eu-de region eu
au-syd region ap
eu-gb region eu
br-sao region br
jp-osa region ap
jp-tok region ap
ca-tor region ca
us-south region us
us-east region us
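The VPC itself was created beforehand (the creation step is not shown); a likely sketch, assuming the same name and resource group:

❯ ibmcloud is vpc-create rdr-mac-p2-wdc06-vpc --resource-group-id 44444444444444444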
Check that the VPC is available:
❯ ibmcloud is vpc rdr-mac-p2-wdc06-vpc --output json | jq -r '.status'
available
Add a subnet:
❯ ibmcloud is subnet-create sn01 rdr-mac-p2-wdc06-vpc \
--resource-group-id 44444444444444444 \
--ipv4-address-count 256 --zone us-east-1
Creating subnet sn01 in resource group 44444444444444444 under account Power Cloud - pcloudci as user email@id.xyz...
ID 0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3
Name sn01
CRN crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3
Status pending
IPv4 CIDR 10.241.0.0/24
Address available 251
Address total 256
Zone us-east-1
Created 2024-01-24T16:18:10-05:00
ACL ID Name
r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668 causation-browse-capture-behind
Routing table ID Name
r001-216fb1f5-da8f-447e-8515-649bc76b83aa retaining-acquaint-retiring-curry
Public Gateway -
VPC ID Name
r001-372372bb-5f18-4e36-8b39-4444444333 rdr-mac-p2-wdc06-vpc
Resource group ID Name
44444444444444444 dev-resource-group
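The public gateway gw01 must exist before it can be attached; a likely creation sketch, assuming zone us-east-1 and the same resource group:

❯ ibmcloud is public-gateway-create gw01 rdr-mac-p2-wdc06-vpc us-east-1 --resource-group-id 44444444444444444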
Attach the public gateway gw01 to the subnet:
❯ ibmcloud is subnet-update sn01 --vpc rdr-mac-p2-wdc06-vpc \
--pgw gw01
Updating subnet sn01 under account Power Cloud - pcloudci as user email@id.xyz...
ID 0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3
Name sn01
CRN crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3
Status pending
IPv4 CIDR 10.241.0.0/24
Address available 251
Address total 256
Zone us-east-1
Created 2024-01-24T16:18:10-05:00
ACL ID Name
r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668 causation-browse-capture-behind
Routing table ID Name
r001-216fb1f5-da8f-447e-8515-649bc76b83aa retaining-acquaint-retiring-curry
Public Gateway ID Name
r001-f5f27e42-aed6-4b1a-b121-f234e5149416 gw01
VPC ID Name
r001-372372bb-5f18-4e36-8b39-4444444333 rdr-mac-p2-wdc06-vpc
Resource group ID Name
44444444444444444 dev-resource-group