Category: OpenShift

  • Red Hat OpenShift Multi-Architecture Compute – Demo MicroServices on IBM Power Systems

    Shows a Microservices Application running on a Red Hat OpenShift Control Plane on IBM Power Systems with an Intel worker.

  • Updates for End of March 2024

    Here are some great updates for the end of March 2024.

    Sizing and configuring an LPAR for AI workloads

    Sebastian Lehrig has a great introduction to CPU/AI/NUMA on Power10.

    https://community.ibm.com/community/user/powerdeveloper/blogs/sebastian-lehrig/2024/03/26/sizing-for-ai

    FYI: a new article has been published: Improving the User Experience for Multi-Architecture Compute on IBM Power

    More and more IBM® Power® clients are modernizing securely with lower risk and faster time to value with cloud-native microservices on Red Hat® OpenShift® running alongside their existing banking and industry applications on AIX, IBM i, and Linux. With the availability of Red Hat OpenShift 4.15 on March 19th, Red Hat and IBM introduced a long-awaited innovation called Multi-Architecture Compute that enables clients to mix Power and x86 worker nodes in a single Red Hat OpenShift cluster. With the release of Red Hat OpenShift 4.15, clients can now run the control plane for a Multi-Architecture Compute cluster natively on Power.

    Some tips for setting up a Multi-Arch Compute Cluster

    If you are setting up a multi-arch compute cluster manually, without automation, you’ll want to follow this process:

    1. Set up the initial cluster with the multi payload, on Intel or Power, for the control plane.
    2. Open the network ports between the two environments.

    ICMP, TCP, and UDP must be able to flow in both directions; a quick connectivity check is sketched below.
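
    A minimal way to verify reachability between the environments (a sketch; the IP address is illustrative and matches the PowerVS subnet used later in this post, and 10250 is the standard kubelet port). UDP traffic, such as the OVN-Kubernetes Geneve overlay on port 6081, must also be allowed in both directions.

    # from an Intel worker, check a Power worker (repeat in the other direction)
    ping -c 3 192.168.200.10
    timeout 3 bash -c '</dev/tcp/192.168.200.10/10250' && echo "kubelet port reachable"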

    3. Configure the cluster

    a. Change any MTU between the networks

    oc patch Network.operator.openshift.io cluster --type=merge --patch \
        '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 1350 } , "machine": { "to" : 9100} } } } }'
    

    b. Limit CSI drivers to a single Arch

    oc annotate --kubeconfig /root/.kube/config ns openshift-cluster-csi-drivers \
      scheduler.alpha.kubernetes.io/node-selector=kubernetes.io/arch=amd64
    

    c. Disable offloading (I do this in the ignition; the script is shown below)
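
    For reference, the NetworkManager dispatcher script that is base64-encoded into the ignition file later in this post decodes to roughly the following (the interface name env2 is specific to this environment):

    if [ "$1" = "env2" ] && [ "$2" = "up" ]
    then
      echo "Turning off tx-checksumming"
      /sbin/ethtool --offload env2 tx-checksumming off
    else
      echo "not running tx-checksumming off"
    fi
    if systemctl is-failed NetworkManager-wait-online
    then
      systemctl restart NetworkManager-wait-online
    fi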

    d. Move the imagepruner jobs to the architecture that makes the most sense

    oc patch imagepruner/cluster -p '{ "spec" : {"nodeSelector": {"kubernetes.io/arch" : "amd64"}}}' --type merge
    

    e. Move the ingress operator pods to the architecture that makes the most sense. If you want the ingress pods on Intel, then patch the cluster:

    oc edit IngressController default -n openshift-ingress-operator
    

    Change ingresscontroller.spec.nodePlacement.nodeSelector to use kubernetes.io/arch: amd64 to move the workload to Intel only, as in the sketch below.
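
    A minimal sketch of the relevant stanza (mirroring the ppc64le example later in this post, but targeting amd64):

    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            kubernetes.io/arch: amd64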

    f. Use routing via host

    oc patch network.operator/cluster --type merge -p \
      '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'
    

    Wait until the MCP has finished updating and has the latest MTU.
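
    One way to wait for that (a sketch using the standard MachineConfigPool Updated condition):

    oc get mcp
    oc wait mcp --all --for=condition=Updated=True --timeout=30m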

    g. Download the ignition file and host it on the local network via HTTP.
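
    One common way to grab and serve the worker ignition (a sketch; the Accept header version and the hostnames are illustrative and should be adjusted to your cluster):

    curl -k -H "Accept: application/vnd.coreos.ignition+json;version=3.2.0" \
        https://api-int.<cluster_name>.<base_domain>:22623/config/worker -o worker.ign
    # serve it on the local network over HTTP
    python3 -m http.server 8080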

    4. Create a new VSI worker and point to the ignition in userdata
    {
        "ignition": {
            "version": "3.4.0",
            "config": {
                "merge": [
                    {
                        "source": "http://${ignition_ip}:8080/ignition/worker.ign"
                    }
                ]
            }
        },
        "storage": {
            "files": [
                {
                    "group": {},
                    "path": "/etc/hostname",
                    "user": {},
                    "contents": {
                        "source": "data:text/plain;base64,${name}",
                        "verification": {}
                    },
                    "mode": 420
                },
                {
                    "group": {},
                    "path": "/etc/NetworkManager/dispatcher.d/20-ethtool",
                    "user": {},
                    "contents": {
                        "source": "data:text/plain;base64,aWYgWyAiJDEiID0gImVudjIiIF0gJiYgWyAiJDIiID0gInVwIiBdCnRoZW4KICBlY2hvICJUdXJuaW5nIG9mZiB0eC1jaGVja3N1bW1pbmciCiAgL3NiaW4vZXRodG9vbCAtLW9mZmxvYWQgZW52MiB0eC1jaGVja3N1bW1pbmcgb2ZmCmVsc2UgCiAgZWNobyAibm90IHJ1bm5pbmcgdHgtY2hlY2tzdW1taW5nIG9mZiIKZmkKaWYgc3lzdGVtY3RsIGlzLWZhaWxlZCBOZXR3b3JrTWFuYWdlci13YWl0LW9ubGluZQp0aGVuCnN5c3RlbWN0bCByZXN0YXJ0IE5ldHdvcmtNYW5hZ2VyLXdhaXQtb25saW5lCmZpCg==",
                        "verification": {}
                    },
                    "mode": 420
                }
            ]
        }
    }
    

    ${name} must be base64 encoded; see the sketch below.
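
    For example (a sketch; the hostname, VPC, subnet, image, and profile values are illustrative, and the VSI creation flags should be checked with ibmcloud is instance-create --help):

    # encode the hostname that replaces ${name} in the userdata
    name=$(echo -n "worker-amd64-0" | base64 -w0)
    # after substituting ${name} and ${ignition_ip} in userdata.json, create the VSI
    ibmcloud is instance-create worker-amd64-0 <vpc_id> us-east-1 bx2-4x16 <subnet_id> \
        --image <rhcos_amd64_image_id> --user-data "$(cat userdata.json)"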

    5. Post-configuration tasks

    a. Configure shared storage using the NFS provisioner and limit it to running on the architecture that hosts the NFS shared volumes; a sketch follows.
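
    A minimal sketch, assuming the NFS provisioner runs as a Deployment (the namespace and deployment name are illustrative, and in this example the NFS volumes are hosted on Power):

    oc patch deployment nfs-client-provisioner -n nfs-provisioner --type merge \
        -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"ppc64le"}}}}}'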

    b. Approve the CSRs for the workers. Do this carefully, as it’s easy to lose count because the pending requests may also include Machine-related CSRs.
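
    To review the pending requests and approve them (the bulk-approve one-liner is a sketch; review the list first):

    oc get csr
    oc get csr -o name | xargs oc adm certificate approve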

    6. Check the cluster operators and nodes; everything should be up and working.
  • Multi-Architecture Compute: Managing User Provisioned Infrastructure Load Balancers with Post-Installation workers

    From https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/03/21/multi-architecture-compute-managing-user-provision?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

    Multi-Arch Compute for Red Hat OpenShift Container Platform on IBM Power systems lets one use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures. The cluster owner is able to add an additional worker post installation.

    With User Provisioned Infrastructure (UPI), the cluster owner may have used automation or manual setup of front-end load balancers. The IBM team provides PowerVS ocp4-upi-powervs, PowerVM ocp4-upi-powervm and HMC ocp4-upi-powervm-hmc automation.

    When installing a cluster, the cluster is set up with an external load balancer, such as haproxy. The external load balancer routes traffic to pools for the Ingress pods, API server, and MachineConfig server. The haproxy configuration is stored at /etc/haproxy/haproxy.cfg.

    For instance, the configuration for ingress-https load balancer would look like the following:

    frontend ingress-https
            bind *:443
            default_backend ingress-https
            mode tcp
            option tcplog
    
    backend ingress-https
            balance source
            mode tcp
            server master0 10.17.15.11:443 check
            server master1 10.17.19.70:443 check
            server master2 10.17.22.204:443 check
            server worker0 10.17.26.89:443 check
            server worker1 10.17.30.71:443 check
            server worker2 10.17.30.225:443 check
    

    When adding a post-installation worker to a UPI cluster, one must update the ingress-http and ingress-https backends.

    1. Get the IP and hostname
    # oc get nodes -lkubernetes.io/arch=amd64 --no-headers=true -ojson | jq  -c '.items[].status.addresses'
    [{"address":"10.17.15.11","type":"InternalIP"},{"address":"worker-amd64-0","type":"Hostname"}]
    [{"address":"10.17.19.70","type":"InternalIP"},{"address":"worker-amd64-1","type":"Hostname"}]
    
    2. Edit the /etc/haproxy/haproxy.cfg

    a. Find backend ingress-http, then add the worker hostnames and IPs before the first server entry.

            server worker-amd64-0 10.17.15.11:80 check
            server worker-amd64-1 10.17.19.70:80 check
    

    b. Find backend ingress-https, then add the worker hostnames and IPs before the first server entry.

            server worker-amd64-0 10.17.15.11:443 check
            server worker-amd64-1 10.17.19.70:443 check
    

    c. Save the config file.

    3. Restart the haproxy
    # systemctl restart haproxy
    

    You now have the additional workers incorporated into haproxy, so even as the ingress pods move between Power and Intel, you still have a fully functional environment.

    Best wishes.

    Paul

    P.S. You can learn more about scaling up the ingress controller at Scaling an Ingress Controller.

    $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
    

    P.P.S. If you are running very advanced scenarios, you can change the ingresscontroller spec.nodePlacement.nodeSelector to put the workload on specific architectures. See Configuring an Ingress Controller.

    nodePlacement:
     nodeSelector:
       matchLabels:
         kubernetes.io/arch: ppc64le
  • OpenShift 4.15

    IBM announced the availability of Red Hat OpenShift 4.15 on IBM Power. Read more about it in
    https://community.ibm.com/community/user/powerdeveloper/blogs/brandon-pederson1/2024/03/15/red-hat-openshift-415-now-available-on-ibm-power

    I worked on the following:

    In Red Hat OpenShift 4.14, Multi-Architecture Compute was introduced for the IBM Power and IBM Z platforms, enabling a single heterogeneous cluster across different compute architectures. With the release of Red Hat OpenShift 4.15, clients can now add x86 compute nodes to a multi-architecture enabled cluster running on Power. This simplifies deployment across different environments even further and provides a more consistent management experience. Clients are accelerating their modernization journeys with multi-architecture compute and Red Hat OpenShift by exploiting the best-fit architecture for different solutions and reducing cost and complexity of workloads that require multiple compute architectures.

  • Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    FYI: Webinar: Getting started with Multi-Arch Compute workloads with your Red Hat OpenShift cluster

    Summary



    The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.

    Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures.

    Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. Afterward, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

    This presentation sets the background and gets you started so you can set up, configure, and schedule workloads. There will be a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.

    Please join me on 11 April 2024, 9:00 AM ET. Please share any questions by clicking on the Reply button. If you have not done so already, register here and add the event to your calendar.

  • February 2024 Updates

    Here are some updates for February 2024

    Open Source Container images for Power now available in IBM Container Registry

    The Power team has added a new image:

    envoy 1.29.0: podman pull icr.io/ppc64le-oss/envoy-ppc64le:1.29.0 (Feb 7, 2024)
    https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

    Kube-burner is a Kubernetes performance and scale test orchestration toolset that provides multi-faceted functionality. A new version, v1.9.2, has been released.

    https://github.com/kube-burner/kube-burner/tree/v1.9.2

    Looking to learn more about Multi-Arch Compute on IBM Power? The following blog details how to connect an IBM PowerVS Workspace to an IBM Cloud Virtual Private Cloud: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/26/setting-up-an-ibm-powervs-workspace-to-a-ibm-cloud

    #IBM #IBMPower #Power10 #PowerVS #MultiArchCompute #PDeX

    Cert-manager is a cluster-wide service that provides application certificate lifecycle management. Learn how to use the cert-manager with your OpenShift cluster on IBM Power: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/01/18/cert-manager-operator-for-red-hat-openshift-v113 

    #IBM #Power10 #IBMPower #RedHat #OpenShift #clusters #clustermanagement #PDeX

    FYI: How to visualize your OpenSCAP compliance reports. Discover SCAPinoculars, a tool that helps you visualize OpenSCAP reports, and the advantages it brings when used with the OpenShift Compliance Operator.

    https://developers.redhat.com/articles/2024/02/08/how-visualize-your-openscap-compliance-reports

    My colleague Yussuf cut a new release, v6.0.0, of ocp4-upi-powervs. Please be sure to pull the latest code and use it when appropriate.

    FYI: My colleague @Punith Kenchappa posted an article on configuring your Multi-Arch Compute Pods with NodeAffinity; see Controlling Pod placement based on weighted node-affinity with your Multi-Arch Compute cluster. It’s super helpful for scheduling workloads across architecture types.
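
    A minimal sketch of what weighted node affinity across architectures can look like (the weights, image, and names are illustrative, not taken from the article):

    apiVersion: v1
    kind: Pod
    metadata:
      name: multiarch-demo
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          # prefer Power nodes, but allow scheduling on Intel
          - weight: 80
            preference:
              matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values: ["ppc64le"]
          - weight: 20
            preference:
              matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values: ["amd64"]
      containers:
      - name: demo
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["sleep", "infinity"]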

  • kube ns delete stuck in terminating

    Per https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=console-namespace-is-stuck-in-terminating-state, you can delete the Namespace stuck in the Terminating Phase.

    Recipe

    1. Grab the namespace json

    oc get namespace ma-operator -o json > tmp.json

    2. Edit tmp.json to remove the finalizer (a jq sketch follows)
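
    One way to do that (a sketch, assuming jq is installed):

    jq '.spec.finalizers = []' tmp.json > tmp.clean.json && mv tmp.clean.json tmp.json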

    3. Start the proxy

    oc proxy &

    4. Delete the namespace

    curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/ma-operator/finalize
  • January 2023 – Lessons Learned

    For the month, I learned lots of things, and wanted to share them as part of snippets that you might find useful.

    Create a virtual server instance in IBM Power Virtual Server using Red Hat Ansible Automation Platform

    The Power Developer Exchange article dives into using the Red Hat Ansible Automation Platform and how to create PowerVS instances with Ansible. The collection is available at https://github.com/IBM-Cloud/ansible-collection-ibm

    Per the blog, you learn to start a sample controller UI and run a sample program, such as the hello_world.yaml playbook, to say hello to Ansible. With Ansible the options are infinite, and there is always something more to explore. We would like to know how you are using this solution, so drop us a comment.

    IBM Power Developer Exchange

    kube-burner is now a CNCF project

    kube-burner is a Kubernetes performance and scale test orchestration framework written in golang

    kube-burner

    Clock Drift Fix for Podman

    To update the default Podman-Machine:

    podman machine ssh --username root -- sed -i 's/^makestep\ .*$/makestep\ 1\ -1/' /etc/chrony.conf
    podman machine ssh --username root -- systemctl restart chronyd

    https://github.com/containers/podman/issues/11541#issuecomment-1416695974

    Advanced Cluster Management across Networks

    The cluster wasn’t getting loaded, so I checked the following, and it pointed to an issue with a callback to a cluster inside my firewall setup. The klusterlet output shows the failing callback:

    oc get pod -n open-cluster-management-agent


    ❯ oc get klusterlet klusterlet -oyaml
    Failed to create &SelfSubjectAccessReview{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:SelfSubjectAccessReviewSpec{ResourceAttributes:&ResourceAttributes{Namespace:,Verb:create,Group:cluster.open-cluster-management.io,Version:,Resource:managedclusters,Subresource:,Name:,},NonResourceAttributes:nil,},Status:SubjectAccessReviewStatus{Allowed:false,Reason:,EvaluationError:,Denied:false,},} with bootstrap secret "open-cluster-management-agent" "bootstrap-hub-kubeconfig": Post "https://api.<XYZ>.com:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews": dial tcp: lookup api.acmfunc.cp.fyre.ibm.com on 172.30.0.10:53: no such host

    Fun way to look at design

  • Setting up an IBM PowerVS Workspace to an IBM Cloud VPC

    As part of the Red Hat OpenShift Multi-Arch Compute effort, I’ve been working on Power and Intel Compute architecture pairs:

    1. Intel Control Plane with Power and Intel Compute
    2. Power Control Plane with Power and Intel Compute

    This article helps you set up an IBM Cloud VPC with IBM Power Virtual Server. You can follow this recipe:

    1. Install the ibmcloud CLI: curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
    2. Install the Power IAAS, Transit Gateway, Cloud Internet Services, and Infrastructure Service plugins: ibmcloud plugin install power-iaas tg-cli vpc-infrastructure cis
    3. Log in to the ibmcloud CLI: ibmcloud login --apikey API_KEY -r us-east
    4. List the datacenters: ibmcloud pi datacenters (in our case we want wdc06)
    5. List the resource group ID:
    ❯ ibmcloud resource group dev-resource-group
    Retrieving resource group dev-resource-group under account 555555555555555 as email@id.xyz...
    OK
    
                              
    Name:                     dev-resource-group
    Account ID:               555555555555555
    ID:                       44444444444444444
    Default Resource Group:   false
    State:                    ACTIVE
    
    6. Create a Workspace on a Power Edge Router enabled PowerVS zone: ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    ❯ ibmcloud pi workspace-create rdr-mac-p2-wdc06 --datacenter wdc06 --group 44444444444444444 --plan public
    Creating workspace rdr-mac-p2-wdc06...
    
    Name       rdr-mac-p2-wdc06
    Plan ID    f165dd34-3a40-423b-9d95-e90a23f724dd
    
    7. Get the workspace ID (the second field in the response)
    ❯ ibmcloud pi workspaces 2>&1 | grep rdr-mac-p2-wdc06
    crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::     7777777-6666-5555-44444-1111111   rdr-mac-p2-wdc06
    
    8. Get the workspace and check that its status is active
    ❯ ibmcloud pi workspace 7777777-6666-5555-44444-1111111 --json
    {
        "capabilities": {
            "cloud-connections": false,
            "power-edge-router": true,
            "power-vpn-connections": false,
            "transit-gateway-connection": false
        },
        "details": {
            "creationDate": "2024-01-24T20:52:59.178Z",
            "crn": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
            "powerEdgeRouter": {
                "state": "active",
                "type": "automated"
            }
        },
        "id": "7777777-6666-5555-44444-1111111",
        "location": {
            "region": "wdc06",
            "type": "data-center",
            "url": "https://us-east.power-iaas.cloud.ibm.com"
        },
        "name": "rdr-mac-p2-wdc06",
        "status": "active",
        "type": "off-premises"
    }
    
    9. Target the workspace
    ❯ ibmcloud pi service-target crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::
    Targeting service crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::...
    
    10. Create a Power Network using the CRN so there is an IP Range for the Power workers.
    ❯ ibmcloud pi network-create-private ocp-net --dns-servers 9.9.9.9 --jumbo --cidr-block 192.168.200.0/24 --gateway 192.168.200.1 --ip-range 192.168.200.10-192.168.200.250
    Creating network ocp-net under account Power Cloud - pcloudci as user email@id.xyz...
    Network ocp-net created.
                 
    ID           3e1add7e-1a12-4a50-9325-87f957b0cd63
    Name         ocp-net
    Type         vlan
    VLAN         797
    CIDR Block   192.168.200.0/24
    IP Range     [192.168.200.10 192.168.200.250]
    Gateway      192.168.200.1
    DNS          9.9.9.9, 161.26.0.10, 161.26.0.11
    
    11. Import the CentOS Stream 8 stock image
    ❯ ibmcloud pi image-create CentOS-Stream-8       
    Creating new image from CentOS-Stream-8 under account Power Cloud - pcloudci as user email@id.xyz...
    Image created from CentOS-Stream-8.
                       
    Image              4904b3db-1dde-4f3c-a696-92f068816f6f
    Name               CentOS-Stream-8
    Arch               ppc64
    Container Format   bare
    Disk Format        raw
    Hypervisor         phyp
    Type               stock
    OS                 rhel
    Size               120
    Created            2024-01-24T21:00:29.000Z
    Last Updated       2024-01-24T21:00:29.000Z
    Description        
    Storage Type       
    Storage Pool    
    
    12. Find the closest location.
    ❯ ibmcloud tg locations
    Listing Transit Service locations under account Power Cloud - pcloudci as user email@id.xyz...
    OK
    Location   Location Type   Billing Location   
    eu-es      region          eu   
    eu-de      region          eu   
    au-syd     region          ap   
    eu-gb      region          eu   
    br-sao     region          br   
    jp-osa     region          ap   
    jp-tok     region          ap   
    ca-tor     region          ca   
    us-south   region          us   
    us-east    region          us   
    
    13. Create the Transit Gateway
    # ibmcloud tg gateway-create --name rdr-mac-p2-wdc06-tg --location us-east --routing global \
        --resource-group-id 44444444444444444 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "pending"
    }%   
    
    14. Wait until the transit gateway is available.
    ❯ ibmcloud tg gw 3333333-22222-1111-0000-dad4b38f5063 --output json
    {
        "created_at": "2024-01-24T21:09:23.184Z",
        "crn": "crn:v1:bluemix:public:transit:us-east:a/555555555555555::gateway:3333333-22222-1111-0000-dad4b38f5063",
        "global": true,
        "id": "3333333-22222-1111-0000-dad4b38f5063",
        "location": "us-east",
        "name": "rdr-mac-p2-wdc06-tg",
        "resource_group": {
            "id": "44444444444444444"
        },
        "status": "available"
    }
    
    15. Create a VPC; at least one subnet with a Public Gateway is added in the following steps.
    ibmcloud is vpc-create rdr-mac-p2-wdc06-vpc --resource-group-id 44444444444444444 --output JSON
    {
        "classic_access": false,
        "created_at": "2024-01-24T21:12:46.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "cse_source_ips": [
            {
                "ip": {
                    "address": "10.12.98.66"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
                    "name": "us-east-1"
                }
            },
            {
                "ip": {
                    "address": "10.12.108.205"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-2",
                    "name": "us-east-2"
                }
            },
            {
                "ip": {
                    "address": "10.22.56.222"
                },
                "zone": {
                    "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-3",
                    "name": "us-east-3"
                }
            }
        ],
        "default_network_acl": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::network-acl:r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/network_acls/r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "id": "r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668",
            "name": "causation-browse-capture-behind"
        },
        "default_routing_table": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333/routing_tables/r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "id": "r001-216fb1f5-da8f-447e-8515-649bc76b83aa",
            "name": "retaining-acquaint-retiring-curry",
            "resource_type": "routing_table"
        },
        "default_security_group": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::security-group:r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/security_groups/r001-ffa5c27a-5f18-5f18-b679-4444444333",
            "id": "r001-ffa5c27a-6073-4e2e-b679-64560cff8b5b",
            "name": "jailer-lurch-treasure-glacial"
        },
        "dns": {
            "enable_hub": false,
            "resolution_binding_count": 0,
            "resolver": {
                "servers": [
                    {
                        "address": "161.26.0.10"
                    },
                    {
                        "address": "161.26.0.11"
                    }
                ],
                "type": "system",
                "configuration": "default"
            }
        },
        "health_reasons": null,
        "health_state": "inapplicable",
        "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
        "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
        "name": "rdr-mac-p2-wdc06-vpc",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "vpc",
        "status": "pending"
    }
    
    16. Check that the status is available
    ❯ ibmcloud is vpc rdr-mac-p2-wdc06-vpc --output json | jq -r '.status'
    available
    
    17. Add a subnet
    ❯ ibmcloud is subnet-create sn01 rdr-mac-p2-wdc06-vpc \
            --resource-group-id 44444444444444444 \
            --ipv4-address-count 256 --zone us-east-1   
    Creating subnet sn01 in resource group 44444444444444444 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      -   
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group      
    
    18. Create a public gateway (it is attached to the subnet in the next step)
    ❯ ibmcloud is public-gateway-create gw01 rdr-mac-p2-wdc06-vpc us-east-1 \
            --resource-group-id 44444444444444444 \
            --output JSON
    {
        "created_at": "2024-01-24T21:21:18.000Z",
        "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::public-gateway:r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "floating_ip": {
            "address": "150.239.80.219",
            "crn": "crn:v1:bluemix:public:is:us-east-1:a/555555555555555::floating-ip:r001-022b865a-4674-4791-94f7-ee4fac646287",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/floating_ips/r001-022b865a-4674-4791-94f7-ee4fac646287",
            "id": "r001-022b865a-4674-4791-94f7-ee4fac646287",
            "name": "gw01"
        },
        "href": "https://us-east.iaas.cloud.ibm.com/v1/public_gateways/r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "id": "r001-f5f27e42-aed6-4b1a-b121-f234e5149416",
        "name": "gw01",
        "resource_group": {
            "href": "https://resource-controller.cloud.ibm.com/v2/resource_groups/44444444444444444",
            "id": "44444444444444444",
            "name": "dev-resource-group"
        },
        "resource_type": "public_gateway",
        "status": "available",
        "vpc": {
            "crn": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
            "href": "https://us-east.iaas.cloud.ibm.com/v1/vpcs/r001-372372bb-5f18-4e36-8b39-4444444333",
            "id": "r001-372372bb-5f18-4e36-8b39-4444444333",
            "name": "rdr-mac-p2-wdc06-vpc",
            "resource_type": "vpc"
        },
        "zone": {
            "href": "https://us-east.iaas.cloud.ibm.com/v1/regions/us-east/zones/us-east-1",
            "name": "us-east-1"
        }
    }%
    
    19. Attach the Public Gateway to the Subnet
    ❯ ibmcloud is subnet-update sn01 --vpc rdr-mac-p2-wdc06-vpc \
            --pgw gw01
    Updating subnet sn01 under account Power Cloud - pcloudci as user email@id.xyz...
                           
    ID                  0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Name                sn01   
    CRN                 crn:v1:bluemix:public:is:us-east-1:a/555555555555555::subnet:0757-46e9ca2e-4c63-4bce-8793-f04251d9bdb3   
    Status              pending   
    IPv4 CIDR           10.241.0.0/24   
    Address available   251   
    Address total       256   
    Zone                us-east-1   
    Created             2024-01-24T16:18:10-05:00   
    ACL                 ID                                          Name      
                        r001-0a0afc6c-0943-4a0f-b998-e5e87ec93668   causation-browse-capture-behind      
                           
    Routing table       ID                                          Name      
                        r001-216fb1f5-da8f-447e-8515-649bc76b83aa   retaining-acquaint-retiring-curry      
                           
    Public Gateway      ID                                          Name      
                        r001-f5f27e42-aed6-4b1a-b121-f234e5149416   gw01      
                           
    VPC                 ID                                          Name      
                        r001-372372bb-5f18-4e36-8b39-4444444333   rdr-mac-p2-wdc06-vpc      
                           
    Resource group      ID                                 Name      
                        44444444444444444   dev-resource-group    
    
    20. Attach the PER network to the TG
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name powervs-conn --network-id crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111:: --network-type power_virtual_server --output json
    
    {
        "created_at": "2024-01-25T00:37:37.364Z",
        "id": "75646025-3ea2-45e2-a5b3-36870a9de141",
        "name": "powervs-conn",
        "network_id": "crn:v1:bluemix:public:power-iaas:wdc06:a/555555555555555:7777777-6666-5555-44444-1111111::",
        "network_type": "power_virtual_server",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    21. Check that the connection status is attached
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 75646025-3ea2-45e2-a5b3-36870a9de141 --output json | jq -r '.status'
    attached
    
    22. Attach the VPC to the TG
    ❯ ibmcloud tg connection-create 3333333-22222-1111-0000-dad4b38f5063 --name vpc-conn --network-id crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333 --network-type vpc --output json
    {
        "created_at": "2024-01-25T00:40:26.629Z",
        "id": "777777777-eef2-4a27-832d-6c80d2ac599f",
        "name": "vpc-conn",
        "network_id": "crn:v1:bluemix:public:is:us-east:a/555555555555555::vpc:r001-372372bb-5f18-4e36-8b39-4444444333",
        "network_type": "vpc",
        "prefix_filters": null,
        "prefix_filters_default": "permit",
        "status": "pending"
    }
    
    23. Check the status; it should be attached
    ❯ ibmcloud tg connection 3333333-22222-1111-0000-dad4b38f5063 777777777-eef2-4a27-832d-6c80d2ac599f --output json | jq -r '.status'
    attached
    

    You now have a VPC and a Power Workspace connected. The next step is to set up the Security Groups to enable communication between the subnets; a first sketch is below.
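
    As a preview, a hedged sketch of allowing the PowerVS subnet into the VPC's default security group (the group ID and CIDR are illustrative; check ibmcloud is security-group-rule-add --help for the exact flags):

    ibmcloud is security-group-rule-add <security_group_id> inbound all --remote 192.168.200.0/24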

    More details to come to help your adoption of Multi-Arch Compute.