Blog
-
Krew plugin on ppc64le
Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63
Hey everyone,
Krew, the kubectl plugin package manager, is now available on Power. The v0.4.4 release includes a ppc64le download, so you can start taking advantage of the krew plugin list. It also works with OpenShift.
The Krew website has a list of plugins (https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross compiled, such as view-utilization.
To take advantage of Krew with OpenShift, here are a few steps:
- Download the krew-linux plugin
# curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
- Extract the krew plugin
tar xvf krew-linux_ppc64le.tar.gz
./LICENSE
./krew-linux_ppc64le
- Move the binary to /usr/bin so it's picked up by oc.
mv krew-linux_ppc64le /usr/bin/kubectl-krew
- Update the krew plugin
# kubectl krew update
WARNING: To be able to run kubectl plugins, you need to add the following to your ~/.bash_profile or ~/.bashrc:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
and restart your shell.
Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
Updated the local copy of plugin index.
- Update your shell:
# echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
- Restart your session (exit and come back to the shell so the variables are loaded)
- Try oc krew list
# oc krew list
PLUGIN VERSION
- List all the plugins that support ppc64le.
# oc krew search | grep -v 'unavailable on linux/ppc64le'
NAME               DESCRIPTION                                    INSTALLED
allctx             Run commands on contexts in your kubeconfig    no
assert             Assert Kubernetes resources                    no
bulk-action        Do bulk actions on Kubernetes resources.       no
...
tmux-exec          An exec multiplexer using Tmux                 no
view-utilization   Shows cluster cpu and memory utilization       no
- Install a plugin
# oc krew install view-utilization
Updated the local copy of plugin index.
Installing plugin: view-utilization
Installed plugin: view-utilization
\
 | Use this plugin:
 | 	kubectl view-utilization
 | Documentation:
 | 	https://github.com/etopeter/kubectl-view-utilization
 | Caveats:
 | \
 |  | This plugin needs the following programs:
 |  | * bash
 |  | * awk (gawk,mawk,awk)
 | /
/
WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
- Use the plugin.
# oc view-utilization
Resource     Requests  %Requests       Limits  %Limits  Allocatable  Schedulable         Free
CPU              7521         16         2400        5        45000        37479        37479
Memory    33477885952         36   3774873600        4  92931489792  59453603840  59453603840
Tip: There are many more plugins that support ppc64le but do not yet have their krew manifest updated.
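If a plugin already works on ppc64le but its krew-index manifest hasn't caught up, krew can install it from a local manifest and archive instead. A rough sketch (the file names below are placeholders you'd supply yourself):
# kubectl krew install --manifest=./my-plugin.yaml --archive=./my-plugin_linux_ppc64le.tar.gz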
Thanks to PR 755 we have support for ppc64le.
References
https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md
https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
-
Lessons and Notes for the Week
Here are some lessons learned from the week (and shared with others):
TIP: How to create a Power Workspace from the CLI
This document outlines the steps necessary to create a PowerVS workspace.
1. Login to IBM Cloud
ibmcloud login --sso -r jp-osa -c 65b64c111111114bbfbd893c2c
-r jp-osa targets the region.
-c 65b64c111111114bbfbd893c2c is the account that is being targeted.
2. Install the Plugins for PowerVS and VPC.
As a good practice, install the powervs and vpc plugins.
❯ ibmcloud plugin install power-iaas -f
❯ ibmcloud plugin install 'vpc-infrastructure' -f
3. Create a Power Systems workspace
- The two elements you should configure below are the PVS_REGION and the WORKSPACE_NAME. The rest will create a new Workspace.
WORKSPACE_NAME=rdr-mac-osa-n1
SERVICE_NAME=power-iaas
RESOURCE_GROUP_NAME=my-resource-group
SERVICE_PLAN_NAME=power-virtual-server-group
PVS_REGION=osa21
❯ ibmcloud resource service-instance-create \
    "${WORKSPACE_NAME}" \
    "${SERVICE_NAME}" \
    "${SERVICE_PLAN_NAME}" \
    "${PVS_REGION}" \
    -g "${RESOURCE_GROUP_NAME}"
Creating service instance rdr-mac-osa-n1 in resource group my-resource-group of account Power Account as pb@ibm.xyz...
OK
Service instance rdr-mac-osa-n1 was created.
Name:            rdr-mac-osa-n1
ID:              crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460e8c2e4bbfbd893c2c:e3772ff7-48eb-4c81-9ee0-07cf0b5547a5::
GUID:            e3772ff7-48eb-4c81-9ee0-07cf0b5547a5
Location:        osa21
State:           provisioning
Type:            service_instance
Sub Type:
Allow Cleanup:   false
Locked:          false
Created at:      2023-07-11T14:12:07Z
Updated at:      2023-07-11T14:12:10Z
Last Operation:
                 Status    create in progress
                 Message   Started create instance operation
Thus you are able to create a PowerVS Workspace from the CLI.
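To confirm the workspace has finished provisioning, one option is to look the instance up again with the same resource CLI:
❯ ibmcloud resource service-instance "${WORKSPACE_NAME}"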
Note: Flatcar Linux
Flatcar Container Linux: a community Linux distribution designed for container workloads, with high security and low maintenance.
I learned about Flatcar Linux this week… exciting stuff.
Demonstration: Tang Server Automation on PowerVM
I work on the OpenShift Container Platform on Power team, which created a few videos about the powervm-tang-server-automation project. The project provides Terraform-based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management. Thanks to Aditi Jadhav for presenting.
The Overview
The Demonstration
New Blog: Installing OpenShift on IBM Power Virtual Servers with IPI
My colleague Ashwin posted a new blog on using IPI. The blog covers creating and destroying a “public” OpenShift cluster, i.e., one which is accessible through the internet and whose nodes can access the internet.
https://community.ibm.com/community/user/powerdeveloper/blogs/ashwin-hendre/2023/07/10/powervs-ipi
-
Notes for the Week
Here are some notes from the week:
The Network Observability Operator is now released on IBM Power Systems.
Network observability operator now available on Power.
Network Observability (NETOBSERV) for IBM Power, little endian 1 for RHEL 9 ppc64le
https://access.redhat.com/errata/RHSA-2023:3905?sc_cid=701600000006NHXAA2
https://community.ibm.com/community/user/powerdeveloper/discussion/network-observability-operator-now-available-on-power
-
Notes from the Week
A few things I learned this week are:
There is a cool session on IPI PowerVS for OpenShift called Introducing Red Hat OpenShift Installer-Provisioned Installation (IPI) for IBM Power Virtual Servers Webinar.
Did you know that you can run Red Hat OpenShift clusters on IBM Power servers? Maybe you do, but you don’t have Power hardware to try it out on, or you don’t have time to learn about OpenShift using the User-Provisioned method of installation. Let us introduce you to the Installer-Provisioned Installation method for OpenShift clusters, also called “an IPI install” on IBM Power Virtual Servers. IPI installs are much simpler than UPI installs because the installer itself has built-in logic that can provision each and every component your cluster needs.
https://community.ibm.com/community/user/powerdeveloper/discussion/introducing-red-hat-openshift-installer-provisioned-installation-ipi-for-ibm-power-virtual-servers-webinar
Join us on 27 July at 10 AM ET for this 1-hour live webinar to learn why the benefits of the IPI installation method go well beyond installation and into the cluster lifecycle. We’ll show you how to deploy OpenShift IPI on Power Virtual Server with a live demo. And finally, we’ll share some ways that you can try it yourself. Please share any questions by clicking on the Reply button. If you have not done so already, register to join here and get your calendar invite.
Of all the things, I finally started using reverse-search in the shell: CTRL+R on the command line. link or link
The IBM Power Systems team announced a Tech Preview of Red Hat Ansible Automation Platform on IBM Power
Continuing our journey to enable our clients’ automation needs, IBM is excited to announce the Technical Preview of Ansible Automation Platform running on IBM Power! Now, in addition to automating against IBM Power endpoints (e.g., AIX, IBM i, etc.), clients will be able to run Ansible Automation Platform components on IBM Power. In addition to IBM Power support for Ansible Automation Platform, Red Hat is also providing support for Ansible running on IBM Z Systems. Now, let’s dive into the specifics of what this entails.
My team released a new version of the PowerVM Tang Server Automation to fix a routing problem:
The powervm-tang-server-automation project (https://github.com/IBM/powervm-tang-server-automation/tree/v1.0.1) provides Terraform-based automation code to help with the deployment of Network Bound Disk Encryption (NBDE) on IBM® Power Systems™ virtualization and cloud management.
The RH Ansible/IBM teams have released Red Hat Ansible Lightspeed with IBM Watson Code Assistant. I’m excited to try it out and expand my Ansible usage.
It’s great to see the Kernel Module Management release 1.1 with Day-1 support through KMM.
The Kernel Module Management Operator manages out of tree kernel modules in Kubernetes.
https://github.com/kubernetes-sigs/kernel-module-management
-
Tidbits – Terraform and Multiarch
Here is a compendium of tidbits from the week:
FYI: Terraform v1.5.0 is released. It’s probably worth updating. link to ppc64le build
❯ brew upgrade terraform
There are new features:
https://github.com/hashicorp/terraform/releases/tag/v1.5.0
check blocks for validating infrastructure: Module and configuration authors can now write independent check blocks within their configuration to validate assertions about their infrastructure.
Adds a new strcontains function that checks whether a given string contains a given substring. (#33069)
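A quick way to try the new strcontains function is an interactive terraform console session; a minimal sketch (the strings are only examples):
❯ terraform console
> strcontains("linux/ppc64le", "ppc64le")
true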
Blog: Build multi-architecture container images on GitHub Actions using native nodes
My colleague, Yussuf, has posted a blog on using GitHub Actions with native nodes. It’s to the point, and super helpful. link
There are already a few good blogs[1][2][3] available in this group that demonstrate how to build multi-arch container images on GitHub Actions using Buildx. However, they use QEMU, which is a free and open-source emulator for running cross-builds. Using QEMU presents its own problems, the main one being slowness that cannot match running the builds natively. There is also no guarantee that the build will always succeed when we use low-level code in the project. These pain points forced us to use native nodes as part of the same Buildx workflow inside a GitHub Action. If you as well want to use native nodes to build your projects on multiple architectures including ppc64le then this article is for you.
https://community.ibm.com/community/user/powerdeveloper/blogs/yussuf-shaikh/2023/06/14/workflow-using-buildx-remote-builders
Tip: Custom Build of QEMU on CentOS 9
I had to do a cross-build for ARM64, so I used QEMU. It’s not necessarily straightforward on CentOS 9; here are the steps I took:
- Connect to my machine.
❯ ssh cloud-user@<machine IP>
- Switch to Root
❯ sudo -s
- Enable the Code Ready Builders repo
❯ dnf config-manager --set-enabled crb
- install a bunch of dependencies
❯ dnf install -y git make pip vim ninja-build gcc glib2-devel.x86_64 pixman.x86_64 libjpeg-devel giflib-devel pixman-devel cairo-devel pango-devel qemu-kvm edk2-aarch64
- Install Python dependencies
❯ pip install Sphinx sphinx-rtd-theme
- Pull the Qemu Code
❯ git clone https://git.qemu.org/git/qemu.git
- Change to QEMU
❯ cd qemu
- Apply the patch
From 14920d35f053c8effd17a232b5144ef43465a85e Mon Sep 17 00:00:00 2001
From: root <root@cross-build-pbastide.novalocal>
Date: Tue, 20 Jun 2023 10:02:39 -0400
Subject: [PATCH] test

---
 configure | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index 01a53576a7..02cabb556b 100755
--- a/configure
+++ b/configure
@@ -371,6 +371,7 @@ else
   # be the result of a missing compiler.
   targetos=bogus
 fi
+targetos=linux

 # OS specific

@@ -508,7 +509,7 @@ case "$cpu" in
   sparc64)
     CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;;
 esac
-
+CPU_CFLAGS="-m64 -mcx16"
 check_py_version() {
     # We require python >= 3.7.
     # NB: a True python conditional creates a non-zero return code (Failure)
--
The above patch is generated from git format-patch -1 HEAD.
- Create the build directory
❯ mkdir -p build
- Configure the qemu build
❯ ./configure --target-list=aarch64-softmmu --enable-virtfs --enable-slirp
- Build with all your processors.
❯ make -j`nproc`
- The file should exist and be in your native architecture.
❯ file ./build/qemu-system-aarch64
./build/qemu-system-aarch64: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=1871423864de4428c4721b8e805304f0142bed6a, for GNU/Linux 3.2.0, with debug_info, not stripped
- Download the CentOS qcow2 from link
❯ curl -O https://cloud.centos.org/centos/9-stream/aarch64/images/CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
- Update the .ssh/authorized_keys in the image.
modprobe nbd
qemu-nbd -c /dev/nbd0 CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2
mount /dev/nbd0p1 /mnt
mkdir -p /mnt/root/.ssh/
- Edit the keys to add your public key.
vim /mnt/root/.ssh/authorized_keys
- Unmount the qcow2 mount
umount /mnt
qemu-nbd -d /dev/nbd0
- Start the QEMU vm.
build/qemu-system-aarch64 -m 4G -M virt -cpu cortex-a57 \
    -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
    -drive if=none,file=CentOS-Stream-GenericCloud-9-20220621.1.aarch64.qcow2,id=hd0 \
    -device virtio-blk-device,drive=hd0 \
    -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp:127.0.0.1:5555-:22 \
    -nographic
Finally, you’ll see:
CentOS Stream 9
Kernel 5.14.0-115.el9.aarch64 on an aarch64

Activate the web console with: systemctl enable --now cockpit.socket

cross-build-pbastide login:
It’s ready-to-go.
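Since the QEMU command forwards host port 127.0.0.1:5555 to the guest’s SSH port, you should be able to log in from the host once the guest is up (assuming your public key was added to the image as above):
❯ ssh -p 5555 root@127.0.0.1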
References
- http://cdn.kernel.org/pub/linux/kernel/people/will/docs/qemu/qemu-arm64-howto.html
- https://fedoraproject.org/wiki/Architectures/AArch64/Install_with_QEMU
- https://wiki-archive.linaro.org/LEG/UEFIforQEMU
- https://packages.debian.org/experimental/qemu-efi-aarch64
- https://www.redhat.com/sysadmin/install-epel-linux
- https://wiki.qemu.org/Hosts/Linux
- https://github.com/Automattic/node-canvas/issues/1065#issuecomment-1278496824
- https://wiki.debian.org/Arm64Qemu
-
A few more notes from the week
A few things I learned about this week are:
IBM Redbooks: Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power
A new document provides hints and tips about how to install your Red Hat OpenShift cluster, and also provides guidance about how to size and tune your environment. I’m reading through it now – and excited.
Upcoming Webinar: Powering AI Innovation: Exploring IBM Power with MMA and ONNX on Power10 Featuring Real Time Use Cases
The session is going to showcase the impressive capabilities of MMA (Matrix Math Accelerator) on the cutting-edge Power10 architecture.
CSI Cinder Configuration for a different availability zone
I had a failed install on OpenStack with Power9 KVM, and I had to redirect the Image Registry to use a different storage class. Use the following StorageClass; you’ll have to change the default and the names.
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard-csi-new
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  availability: nova
If you need to change the default-class, then:
oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
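And, assuming you want the new class from the example above (standard-csi-new) to become the default, the matching patch is:
oc patch storageclass standard-csi-new -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'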
TIP: openshift-install router quota check
If the installer fails with:
FATAL failed to fetch Cluster: failed to fetch dependency of “Cluster”: failed to generate asset “Platform Quota Check”: error(MissingQuota): Router is not available because the required number of resources (1) is more than remaining quota of 0
Then check the quota for the number of routers. You probably need to remove some old ones.
# openstack --os-cloud openstack quota show | grep router
| routers | 15 |
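To find and remove routers you no longer need, a rough sketch (the router name is a placeholder):
# openstack --os-cloud openstack router list
# openstack --os-cloud openstack router delete <old-router-name>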
-
A few weeks of notes
I’ve been working hard on multiarch enablement for various OpenShift features. Here are a few notes from the last few weeks:
New Blog on Cluster API
Delve into the powerful capabilities of Cluster API and how it enables effortless K8s cluster deployment on PowerVC: https://community.ibm.com/community/user/powerdeveloper/blogs/prajyot-parab/2023/05/31/simplifying-k8s-cluster-deployment-leveraging-capi
The post is by Prajyot Parab from the Kubernetes/OpenShift on Power team. It’s a really helpful and interesting solution for deploying your cluster.
TIP: OpenShift Installer Provisioned Infrastructure on IBM Cloud
I installed a new cluster with the openshift-installer using IPI on IBM Cloud with a pre-defined VPC and predefined networks. If your install hangs and fails mysteriously after 30-40 minutes, with three provisioned RHCOS nodes trying to call out to quay.io, it could mean the Public Gateway for the network is not enabled, so the nodes cannot reach quay.io.
This issue was tough to debug, and I hope it helps you.
TIP: scp hangs because of a bad MTU
The scp command opens the channel and hangs…
scp -vvv -i data/id_rsa sample.txt root@1.1.1.1:/tmp
...
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 e[write]/0 fd 6/7/8 sock -1 cc -1)
You can go check the optimal MTU to send to the destination.
# ping 1.1.1.1 -c 10 -M do -s 1499
PING 1.1.1.1 (1.1.1.1) 1499(1527) bytes of data.
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
ping: local error: Message too long, mtu=1500
Then per the link https://unix.stackexchange.com/questions/14187/why-does-scp-hang-on-copying-files-larger-than-1405-bytes
ip link set eth0 mtu 1400
Then it’ll work.
The above will help when scp hangs.
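Note that ip link set only lasts until the next reboot. If you want the lower MTU to persist, one option, assuming NetworkManager manages the interface and the connection is also named eth0, is:
nmcli connection modify eth0 802-3-ethernet.mtu 1400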
Blog: Using the oc-compliance plugin on the OpenShift Container Platform on Power
My team has added support for oc-compliance on OpenShift Container Platform on IBM Power, and in this post, I’m sharing the download, the setup, and using the tool in the cluster.
https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/06/07/using-the-oc-compliance-co
The oc-compliance plugin is super helpful, and my colleague Aditi has created a new blog on oc-compliance.
A blog on using seccomp with OCP4.
https://medium.com/@aditijadhav38/configuring-seccomp-profile-on-openshift-container-platform-for-security-and-compliance-on-power-d94907f4b1f9
My teammate Aditi updated it for 4.12 and 4.13 (surprisingly no changes, which is good).
-
Weekly Notes
Here are my weekly learnings and notes:
Podman Desktop updates v1.0.1
Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment.
In a cool update, the Podman Desktop team added support for OpenShift Local in v1.0.1; Kind clusters were already supported. We can do some advanced stuff with it. You may have to download extensions and upgrade Podman to v4.5.0.
❯ brew upgrade podman-desktop
...
🍺 podman-desktop was successfully upgraded!
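If you also upgraded the Podman engine itself, a quick sanity check of the version:
❯ podman --version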
Skupper… interesting
Skupper is a layer 7 service interconnect. It enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.
It is a new layer-7 interconnect, and there is a sample to try.
Red Hat OpenShift Container Platform 4.13.0 is generally available
I’ve been working on the product for 4.13.0 – oc new-app and new-build support.
Podman Cheat Sheet
Podman Cheat Sheet covers all the basic commands for managing images, containers, and container resources. Super helpful for those stuck finding the right command to build/manage or run your container.
File Integrity Operator: Using File Integrity Operator to support file integrity checks on OpenShift Container Platform on Power
My colleague has published a blog on File Integrity Operator.
As part of this series, I have written a blog on PCI-DSS and the Compliance Operator to have a secure and compliant cluster. Part of the cluster’s security and compliance depends on the File Integrity Operator – an operator that uses intrusion detection rules to verify the integrity of files and directories on the cluster’s nodes.
https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/05/24/using-file-integrity-operator-to-support-file-inte
-
Weekly Notes
Here are my notes from the week:
- Subnet to CIDR block Cheat Sheet
- OpenShift Installer Provisioned Infrastructure for IBM Cloud VPC
rfc1878: Subnet CIDR Cheat Sheet
I found a great cheat sheet for CIDR subnet masks.
Mask value:                                 # of
Hex           CIDR   Decimal            addresses   Classful
80.00.00.00   /1     128.0.0.0          2048 M      128 A
C0.00.00.00   /2     192.0.0.0          1024 M      64 A
E0.00.00.00   /3     224.0.0.0          512 M       32 A
F0.00.00.00   /4     240.0.0.0          256 M       16 A
F8.00.00.00   /5     248.0.0.0          128 M       8 A
FC.00.00.00   /6     252.0.0.0          64 M        4 A
FE.00.00.00   /7     254.0.0.0          32 M        2 A
FF.00.00.00   /8     255.0.0.0          16 M        1 A
FF.80.00.00   /9     255.128.0.0        8 M         128 B
FF.C0.00.00   /10    255.192.0.0        4 M         64 B
FF.E0.00.00   /11    255.224.0.0        2 M         32 B
FF.F0.00.00   /12    255.240.0.0        1024 K      16 B
FF.F8.00.00   /13    255.248.0.0        512 K       8 B
FF.FC.00.00   /14    255.252.0.0        256 K       4 B
FF.FE.00.00   /15    255.254.0.0        128 K       2 B
FF.FF.00.00   /16    255.255.0.0        64 K        1 B
FF.FF.80.00   /17    255.255.128.0      32 K        128 C
FF.FF.C0.00   /18    255.255.192.0      16 K        64 C
FF.FF.E0.00   /19    255.255.224.0      8 K         32 C
FF.FF.F0.00   /20    255.255.240.0      4 K         16 C
FF.FF.F8.00   /21    255.255.248.0      2 K         8 C
FF.FF.FC.00   /22    255.255.252.0      1 K         4 C
FF.FF.FE.00   /23    255.255.254.0      512         2 C
FF.FF.FF.00   /24    255.255.255.0      256         1 C
FF.FF.FF.80   /25    255.255.255.128    128         1/2 C
FF.FF.FF.C0   /26    255.255.255.192    64          1/4 C
FF.FF.FF.E0   /27    255.255.255.224    32          1/8 C
FF.FF.FF.F0   /28    255.255.255.240    16          1/16 C
FF.FF.FF.F8   /29    255.255.255.248    8           1/32 C
FF.FF.FF.FC   /30    255.255.255.252    4           1/64 C
FF.FF.FF.FE   /31    255.255.255.254    2           1/128 C
FF.FF.FF.FF   /32    255.255.255.255    1
Thanks to the following sites for the pointer to the RFC, and to the RFC itself.
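As a quick sanity check against the table, the number of addresses for a prefix is 2^(32 - prefix length); for example, a /23 in the shell:
❯ echo $((2 ** (32 - 23)))
512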
Mutating WebHook to add Node Selectors
Thanks to these sites
- hmcts/k8s-env-injector provided inspiration for this approach and updates the code patterns for the latest kubernetes versions.
- phenixblue/imageswap-webhook provided the python based pattern for this approach.
- Kubernetes: MutatingAdmissionWebhook
I added some code to add annotations and nodeSelectors: https://github.com/prb112/openshift-demo/tree/main/mutating
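The net effect of the webhook is the same as patching a nodeSelector onto a workload's pod template. As a rough illustration only (my-app is a placeholder deployment; the webhook does this automatically at admission time rather than via oc):
oc patch deployment/my-app -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"ppc64le"}}}}}'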
Installing OpenShift install provisioned infrastructure on IBM Cloud VPC
This document outlines installing IPI on IBM Cloud using the openshift-installer.
As of OpenShift 4.13, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud VPC. The installation program provisions the required infrastructure, which you can then further customize.
This document describes the creation of an OCP cluster using IPI (Installer Provisioned Infrastructure) on an existing IBM Cloud VPC.
This setup is used with the day-2 operations on PowerVS to make a multiarch compute cluster.
- Create IBM API Key
- Create the IAM Services
- Pick your build
- Deploy
1. Create IBM API Key
- Navigate to API keys (iam – api keys)
- Click Create
- Enter the name rdr-demo
- Click Create
- Copy your API key, it’ll be used later on.
2. Create the IAM Services
- Navigate to Service Ids (iam – serviceids)
- Click create service id with the name rdr-demo to identify your team.
- Assign access:
Internet Services (All): Viewer, Operator, Editor, Reader, Writer, Manager, Administrator
Cloud Object Storage (All): Viewer, Operator, Editor, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Administrator
IAM Identity Service (All): Viewer, Operator, Editor, Administrator, ccoctlPolicy, policycreate
Resource group (only the ocp-dev-resource-group resource group): Viewer, Administrator, Editor, Operator
VPC Infrastructure Services (All): Viewer, Operator, Editor, Reader, Writer, Administrator, Manager
3. Pick your build
I used 4.13.0-rc.7.
4. Deploy
- Connect to your jumpserver or bastion where you are doing the deployment.
Tip: it’s worth having tmux installed for this install (it’ll take about 1h30m).
- Export the API KEY you created above
❯ export IC_API_KEY=<REDACTED>
- Create a working folder
❯ mkdir -p ipi-vpc-414-rc7
❯ cd ipi-vpc-414-rc7
- Download the installers and extract to the binary folder.
❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/ccoctl-linux.tar.gz
❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-client-linux.tar.gz
❯ curl -O -L https://mirror.openshift.com/pub/openshift-v4/amd64/clients/ocp/4.13.0-rc.7/openshift-install-linux.tar.gz
❯ tar xvf ccoctl-linux.tar.gz --dir /usr/local/bin/
❯ tar xvf openshift-client-linux.tar.gz --dir /usr/local/bin/
❯ tar xvf openshift-install-linux.tar.gz --dir /usr/local/bin/
- Verify the openshift-install version is correct.
❯ openshift-install version
openshift-install 4.13.0-rc.7
built from commit 3e0b2a2ec26d9ffcca34b361896418499ad9d603
release image quay.io/openshift-release-dev/ocp-release@sha256:aae5131ec824c301c11d0bf11d81b3996a222be8b49ce4716e9d464229a2f92b
release architecture amd64
- Copy over your pull-secret.
a. Login with your Red Hat id
b. Navigate to https://console.redhat.com/openshift/install/ibm-cloud
c. Scroll down the page and copy the pull-secret.
This pull-secret should work for you; save it for later as pull-secret.txt in the working directory.
- Extract the CredentialsRequest objects and create the credentials.
RELEASE_IMAGE=$(openshift-install version | awk '/release image/ {print $3}')
oc adm release extract --cloud=ibmcloud --credentials-requests $RELEASE_IMAGE --to=rdr-demo
ccoctl ibmcloud create-service-id --credentials-requests-dir rdr-demo --output-dir rdr-demo-out --name rdr-demo --resource-group-name ocp-dev-resource-group
- Create the install-config
❯ openshift-install create install-config --dir rc7_2
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform ibmcloud
? Region jp-osa
? Base Domain ocp-multiarch.xyz (rdr-multi-is)
? Cluster Name rdr-multi-pb
? Pull Secret [? for help] ***********************************************************************************************************************
INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
- Edit the install-config.yaml to add resourceGroupName
platform:
  ibmcloud:
    region: jp-osa
    resourceGroupName: my-resource-group
- Copy the generated ccoctl manifests over.
❯ cp rdr-demo-out/manifests/* rc7_1/manifests/
- Create the manifests.
❯ openshift-install create manifests --dir=rc7_1
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Manifests created in: rc7_1/manifests and rc7_1/openshift
- Create the cluster.
❯ openshift-install create cluster --dir=rc7_3
INFO Consuming Worker Machines from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Openshift Manifests from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Master Machines from target directory
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.13-9.2/builds/413.92.202305021736-0/x86_64/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2.gz?sha256=222abce547c1bbf32723676f4977a3721c8a3788f0b7b6b3496b79999e8c60b3'
INFO The file was found in cache: /root/.cache/openshift-installer/image_cache/rhcos-413.92.202305021736-0-ibmcloud.x86_64.qcow2. Reusing...
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s (until 12:09PM) for the Kubernetes API at https://api.xyz.ocp-multiarch.xyz:6443...
INFO API v1.26.3+b404935 up
INFO Waiting up to 30m0s (until 12:19PM) for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s (until 12:41PM) for the cluster at https://api.xyz.ocp-multiarch.xyz:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ipi-vpc-414-rc7/rc7_3/auth/kubeconfig'
INFO Access the OpenShift web-console here:
INFO Login to the console with user: "kubeadmin", and password: "xxxxxxxxx-wwwwww-xxxx-aas"
INFO Time elapsed: 1h28m9s
- Verify the cluster
a. set kubeconfig provided by installation
export KUBECONFIG=$(pwd)/rc7_1/auth/kubeconfig
b. Check the nodes are Ready
❯ oc get nodes
NAME                                    STATUS   ROLES                  AGE     VERSION
rdr-multi-ca-rc6-tplwd-master-0         Ready    control-plane,master   5h13m   v1.26.3+b404935
rdr-multi-ca-rc6-tplwd-master-1         Ready    control-plane,master   5h13m   v1.26.3+b404935
rdr-multi-ca-rc6-tplwd-master-2         Ready    control-plane,master   5h13m   v1.26.3+b404935
rdr-multi-ca-rc6-tplwd-worker-1-pfqjx   Ready    worker                 4h47m   v1.26.3+b404935
rdr-multi-ca-rc6-tplwd-worker-1-th8j4   Ready    worker                 4h47m   v1.26.3+b404935
rdr-multi-ca-rc6-tplwd-worker-1-xl75m   Ready    worker                 4h53m   v1.26.3+b404935
c. Check Cluster Operators
❯ oc get co
NAME                                       VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.13.0-rc.6   True        False         False      4h43m
baremetal                                  4.13.0-rc.6   True        False         False      5h5m
cloud-controller-manager                   4.13.0-rc.6   True        False         False      5h13m
cloud-credential                           4.13.0-rc.6   True        False         False      5h18m
cluster-autoscaler                         4.13.0-rc.6   True        False         False      5h5m
config-operator                            4.13.0-rc.6   True        False         False      5h7m
console                                    4.13.0-rc.6   True        False         False      4h47m
control-plane-machine-set                  4.13.0-rc.6   True        False         False      5h5m
csi-snapshot-controller                    4.13.0-rc.6   True        False         False      4h54m
dns                                        4.13.0-rc.6   True        False         False      4h54m
etcd                                       4.13.0-rc.6   True        False         False      4h57m
image-registry                             4.13.0-rc.6   True        False         False      4h50m
ingress                                    4.13.0-rc.6   True        False         False      4h51m
insights                                   4.13.0-rc.6   True        False         False      5h
kube-apiserver                             4.13.0-rc.6   True        False         False      4h53m
kube-controller-manager                    4.13.0-rc.6   True        False         False      4h53m
kube-scheduler                             4.13.0-rc.6   True        False         False      4h52m
kube-storage-version-migrator              4.13.0-rc.6   True        False         False      4h54m
machine-api                                4.13.0-rc.6   True        False         False      4h48m
machine-approver                           4.13.0-rc.6   True        False         False      5h5m
machine-config                             4.13.0-rc.6   True        False         False      5h6m
marketplace                                4.13.0-rc.6   True        False         False      5h5m
monitoring                                 4.13.0-rc.6   True        False         False      4h45m
network                                    4.13.0-rc.6   True        False         False      5h8m
node-tuning                                4.13.0-rc.6   True        False         False      4h54m
openshift-apiserver                        4.13.0-rc.6   True        False         False      4h47m
openshift-controller-manager               4.13.0-rc.6   True        False         False      4h54m
openshift-samples                          4.13.0-rc.6   True        False         False      4h50m
operator-lifecycle-manager                 4.13.0-rc.6   True        False         False      5h6m
operator-lifecycle-manager-catalog         4.13.0-rc.6   True        False         False      5h6m
operator-lifecycle-manager-packageserver   4.13.0-rc.6   True        False         False      4h51m
service-ca                                 4.13.0-rc.6   True        False         False      5h7m
storage                                    4.13.0-rc.6   True        False         False      4h51m
Note – Confirm that all master/worker nodes and operators are running healthy and true.
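As one more standard check, confirm the overall cluster version reports the install as complete:
❯ oc get clusterversion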
- Verify the browser login
A. Open a browser and log in to the Console URL using the available credentials, e.g.,
URL - https://console-openshift-console.apps.xxxxxx.ocp-multiarch.xyz
Username – kubeadmin
Password - <Generated Password>
- Destroy the cluster: run the command below, specifying the installation directory.
❯ ./openshift-install destroy cluster --dir ocp413-rc6 --log-level=debug
This should destroy all resources created for the cluster. If you have provisioned other resources in the generated subnet, the destroy command will fail.
Notes
- You can use a pre-provisioned VPC; see https://docs.openshift.com/container-platform/4.12/installing/installing_ibm_cloud_public/installing-ibm-cloud-vpc.html#installing-ibm-cloud-vpc
- Cloud credential request – An admin will have to create these for you, and as such, you’ll need to copy them over to the right locations in manifests/
- Use --log-level debug with the installer to inspect the run.
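For example, re-running the create step above with debug logging:
❯ openshift-install create cluster --dir rc7_3 --log-level debug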
References