Tag: openshift
-
Krew plugin on ppc64le
Posted to https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/07/19/kubernetes-krew-plugin-support-for-power?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63
Hey everyone,
Krew, the kubectl plugin package manager, is now available on Power. Release v0.4.4 includes a ppc64le download, so you can start taking advantage of the Krew plugin list. It also works with OpenShift.
The Krew website has a [list of plugins](https://krew.sigs.k8s.io/plugins/). Not all of the plugins support ppc64le; however, many are cross-arch scripts or are cross-compiled, such as `view-utilization`.

To take advantage of Krew with OpenShift, here are a few steps:
- Download the krew-linux plugin
```
# curl -L -O https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 3977k  100 3977k    0     0  6333k      0 --:--:-- --:--:-- --:--:-- 30.5M
```
- Extract the krew plugin
```
tar xvf krew-linux_ppc64le.tar.gz
./LICENSE
./krew-linux_ppc64le
```
- Move the binary to /usr/bin so it's picked up by `oc`.
```
mv krew-linux_ppc64le /usr/bin/kubectl-krew
```
- Update the krew plugin
```
# kubectl krew update
WARNING: To be able to run kubectl plugins, you need to add the following to your ~/.bash_profile or ~/.bashrc:
    export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
and restart your shell.
Adding "default" plugin index from https://github.com/kubernetes-sigs/krew-index.git.
Updated the local copy of plugin index.
```
- Update your shell:
```
# echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
```
- Restart your session (exit and come back to the shell so the variables are loaded)
- Try oc krew list
```
# oc krew list
PLUGIN  VERSION
```
- List all the plugins that support ppc64le.
```
# oc krew search | grep -v 'unavailable on linux/ppc64le'
NAME                DESCRIPTION                                    INSTALLED
allctx              Run commands on contexts in your kubeconfig    no
assert              Assert Kubernetes resources                    no
bulk-action         Do bulk actions on Kubernetes resources.       no
...
tmux-exec           An exec multiplexer using Tmux                 no
view-utilization    Shows cluster cpu and memory utilization       no
```
- Install a plugin
```
# oc krew install view-utilization
Updated the local copy of plugin index.
Installing plugin: view-utilization
Installed plugin: view-utilization
\
 | Use this plugin:
 | 	kubectl view-utilization
 | Documentation:
 | 	https://github.com/etopeter/kubectl-view-utilization
 | Caveats:
 | \
 |  | This plugin needs the following programs:
 |  | * bash
 |  | * awk (gawk,mawk,awk)
 | /
/
WARNING: You installed plugin "view-utilization" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
```
- Use the plugin.
```
# oc view-utilization
Resource       Requests  %Requests      Limits  %Limits  Allocatable  Schedulable         Free
CPU                7521         16        2400        5        45000        37479        37479
Memory      33477885952         36  3774873600        4  92931489792  59453603840  59453603840
```
Tip: There are many more plugins that support ppc64le but do not yet have their krew manifests updated; see the sketch below.
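If a plugin's release already ships a ppc64le binary but its krew manifest lacks the platform entry, krew can install from a local manifest (a sketch; `my-plugin.yaml` is a hypothetical copy of the upstream manifest with a linux/ppc64le platform added):

```
# my-plugin.yaml: upstream krew manifest patched with a linux/ppc64le entry.
kubectl krew install --manifest=./my-plugin.yaml
```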
Thanks to PR 755, we have support for ppc64le.
References
https://github.com/kubernetes-sigs/krew/blob/v0.4.4/README.md
https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_ppc64le.tar.gz
-
A few more notes from the week
A few things I learned about this week are:
IBM Redbooks: Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power
A new document provides hints and tips about how to install your Red Hat OpenShift cluster, and also provides guidance about how to size and tune your environment. I’m reading through it now – and excited.
Upcoming Webinar: Powering AI Innovation: Exploring IBM Power with MMA and ONNX on Power10 Featuring Real Time Use Cases
The session is going to showcase the impressive capabilities of MMA (Matrix Math Accelerator) on the cutting-edge Power10 architecture.
CSI Cinder Configuration for a different availability zone
I had a failed install on OpenStack with Power9 KVM, and I had to redirect the Image Registry to use a different storage class. Use the following StorageClass; you’ll have to change the default annotation and the names.
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard-csi-new
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  availability: nova
```
If you need to change the default-class, then:
```
oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```
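Then mark the new class as the default (assuming it is named standard-csi-new as in the example above):

```
oc patch storageclass standard-csi-new -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```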
TIP: openshift-install router quota check
If the installer fails with:

```
FATAL failed to fetch Cluster: failed to fetch dependency of "Cluster": failed to generate asset "Platform Quota Check": error(MissingQuota): Router is not available because the required number of resources (1) is more than remaining quota of 0
```
Then check the quota for the number of routers. You probably need to remove some old ones.
```
# openstack --os-cloud openstack quota show | grep router
| routers | 15 |
```
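To free up quota, list the routers and delete any stale ones (a sketch; the router id is a placeholder):

```
openstack --os-cloud openstack router list -c ID -c Name
openstack --os-cloud openstack router delete <router-id>
```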
-
Weekly Notes
Here are my weekly learnings and notes:
Podman Desktop updates v1.0.1
Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment.
In a cool update, the Podman Desktop team added support for OpenShift Local in v1.0.1; Kind clusters are already supported. We can do some advanced stuff. You may have to download extensions and upgrade Podman to v4.5.0.
```
❯ brew upgrade podman-desktop
...
🍺  podman-desktop was successfully upgraded!
```
Skupper… interesting
Skupper is a layer 7 service interconnect. It enables secure communication across Kubernetes clusters with no VPNs or special firewall rules.
It’s a new style of layer-7 interconnect, and there is a sample to get you started.
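For a feel of the workflow, here is a sketch of linking two namespaces in different clusters with the skupper CLI (assumed installed; names are placeholders):

```
# In the first cluster/namespace ("west"): start a site and mint a token.
skupper init
skupper token create ~/west.token

# In the second cluster/namespace ("east"): start a site and link it to west.
skupper init
skupper link create ~/west.token
```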
Red Hat OpenShift Container Platform 4.13.0 is generally available
I’ve been working on the product for 4.13.0 – specifically oc new-app and new-build support.
Podman Cheat Sheet
The Podman Cheat Sheet covers all the basic commands for managing images, containers, and container resources. Super helpful for anyone stuck finding the right command to build, manage, or run a container.
File Integrity Operator: Using File Integrity Operator to support file integrity checks on OpenShift Container Platform on Power
My colleague has published a blog on File Integrity Operator.
As part of this series, I have written a blog on PCI-DSS and the Compliance Operator to have a secure and compliant cluster. Part of the cluster’s security and compliance depends on the File Integrity Operator – an operator that uses intrusion detection rules to verify the integrity of files and directories on cluster’s nodes.
https://community.ibm.com/community/user/powerdeveloper/blogs/aditi-jadhav/2023/05/24/using-file-integrity-operator-to-support-file-inte
-
Development Notes
Here are some things I found interesting this week:
Day-0 Day-1 Day-2 Definitions
day-0: customized installation
day-1: customization performed only once after installing a cluster
day-2: tasks performed multiple times during the life of a cluster

Thanks to a Red Hat colleague for this wonderful definition.
sfdisk tips
I used sfdisk in a PowerVM project. I found these commands helpful:
```
# sfdisk --json /dev/mapper/mpatha
{
   "partitiontable": {
      "label": "dos",
      "id": "0x14fc63d2",
      "device": "/dev/mapper/mpatha",
      "unit": "sectors",
      "partitions": [
         {"node": "/dev/mapper/mpatha1", "start": 2048, "size": 8192, "type": "41", "bootable": true},
         {"node": "/dev/mapper/mpatha2", "start": 10240, "size": 251647967, "type": "83"}
      ]
   }
}
# sfdisk --dump /dev/mapper/mpatha
label: dos
label-id: 0x14fc63d2
device: /dev/mapper/mpatha
unit: sectors

/dev/mapper/mpatha1 : start=        2048, size=        8192, type=41, bootable
/dev/mapper/mpatha2 : start=       10240, size=   251647967, type=83
```
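A related pattern that pairs well with --dump is cloning a partition table to another disk (a sketch; /dev/mapper/mpathb is an assumed target of at least the same size):

```
# Save the layout of mpatha, then replay it onto mpathb.
sfdisk --dump /dev/mapper/mpatha > mpatha.dump
sfdisk /dev/mapper/mpathb < mpatha.dump
```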
https://www.computerhope.com/unix/sfdisk.htm
Red Hat OpenShift Container Platform IPI Config for IBM Cloud VPC
I generated an example configuration:
```yaml
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: ocp-power.xyz
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ibmcloud:
      zones:
      - jp-osa-1
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  replicas: 3
  platform:
    ibmcloud:
      zones:
      - jp-osa-1
metadata:
  name: rdr-test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
  zones:
  - jp-osa2-1
platform:
  ibmcloud:
    region: jp-osa
    resourceGroupName: dev-resource-group
    vpcName: ma-compute-vpc
    controlPlaneSubnets:
    - ma-compute-sn1
    computeSubnets:
    - ma-compute-sn1
publish: External
pullSecret: 'XYZWX'
sshKey: ssh-ed25519 XYZWX
fips: false
```
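With the configuration in place, the usual flow applies (a sketch; note that credentialsMode: Manual means you must pre-create the IAM credentials, for example with ccoctl, before the install):

```
# Render the manifests first if you want to inspect them, then install.
openshift-install create manifests --dir ./rdr-test
openshift-install create cluster --dir ./rdr-test --log-level=debug
```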
-
Weekly Notes
For the week, I ran across a few things and wrote one blog for the IBM Power Developer Exchange.
a. Get the CoreOS qcow2 image
```
STREAM="stable"
coreos-installer download -s "${STREAM}" -p qemu -f qcow2.xz --decompress -C ~/.local/share/libvirt/images/
```
b. Create the QEMU VM
```
IGNITION_CONFIG="/path/to/example.ign"
IMAGE="/path/to/image.qcow2"
# for s390x/ppc64le:
IGNITION_DEVICE_ARG="-drive file=${IGNITION_CONFIG},if=none,format=raw,readonly=on,id=ignition -device virtio-blk,serial=ignition,drive=ignition"

qemu-kvm -m 2048 -cpu host -nographic -snapshot \
  -drive if=virtio,file=${IMAGE} ${IGNITION_DEVICE_ARG} \
  -nic user,model=virtio,hostfwd=tcp::2222-:22
```
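Once the VM boots, the `hostfwd=tcp::2222-:22` option above forwards SSH, so you can reach the guest from the host:

```
# core is the standard CoreOS user; key access comes from your ignition file.
ssh -p 2222 core@localhost
```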
- coreos/butane has a nice getting started page with advanced configuration for OpenShift 4.13 link
- A curated list of OCP Security and Compliance info
As you define, build, and run your OpenShift Container Platform cluster, you should be aware of the rich security features available. Here is a curated list of security and compliance focused resources on topics from configuring FIPS to using the Compliance Operator on the Power Platform: https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2023/03/29/the-blog-of-blogs-security-and-compliance-resource #PDeX #IBM #IBMPower #RedHat #OpenShift #Containers #Security #Compliance
-
Handy Dandy Tricks for the Week
As I work with Linux and the OpenShift Container Platform, I run into handy tips, tricks, and patterns that make my job easier, and I wanted to share them with you all (and so I can find these details in the future):
1. Verifying etcd data is encrypted
etcd relies on the kube-apiserver and openshift-apiserver to transparently encrypt a subset of the API resources with `AES-CBC` (and eventually `AES-GCM`). The subset includes: Secrets, ConfigMaps, Routes, OAuth access tokens, and OAuth authorize tokens.

To verify that the contents are actually encrypted, see https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted or https://docs.openshift.com/container-platform/4.12/security/encrypting-etcd.html, or try the sketch below.
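A minimal spot check (a sketch; the pod name etcd-master-0 and the key path are cluster-specific assumptions):

```
# Read a secret's raw value straight from etcd; pod name and key path
# are assumptions that vary by cluster.
oc rsh -n openshift-etcd -c etcd etcd-master-0 \
  etcdctl get /kubernetes.io/secrets/openshift-config/pull-secret | hexdump -C | head
# An encrypted value starts with "k8s:enc:aescbc:v1:" instead of raw JSON.
```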
2. Using a Pod to run lsof across the whole Node
You can create a debug Pod with `lsof`.
- Create a debug Pod with elevated privileges.
```
cat << EOF > bpftrace-diagnostic-0.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    openshift.io/scc: node-exporter
  name: bpftrace-diagnostic-0
  namespace: openshift-etcd
spec:
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Always
  priority: 2000000000
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  containers:
    - name: diagnostic
      image: quay.io/centos/centos:stream8
      imagePullPolicy: IfNotPresent
      command: [ "sh", "-c", "sleep inf" ]
      resources:
        requests:
          cpu: 1000m
          memory: 2048Mi
      volumeMounts:
        - name: host-sys
          mountPath: /sys
      terminationMessagePath: /dev/termination-log
      securityContext:
        privileged: true
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          add:
            - CAP_SYS_ADMIN
            - CAP_FOWNER
            - NET_ADMIN
            - SYS_ADMIN
          drop:
            - ALL
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: true
  volumes:
    - name: host-sys
      hostPath:
        path: /sys
        type: Directory
  nodeName: master-0
  priorityClassName: system-cluster-critical
  hostPID: true
  hostIPC: true
  hostNetwork: true
EOF
```
- Delete an existing pod and apply a new pod.
```
oc delete pod/bpftrace-diagnostic-0; oc apply -f bpftrace-diagnostic-0.yaml
```
- Connect to the Pod
```
oc rsh pod/bpftrace-diagnostic-0
```
- You can install `lsof` and verify the files opened by a specific process.
```
❯ dnf install -y lsof
❯ ps -ef | grep 'etcd '
root     1236199 1235495  0 16:31 pts/1    00:00:00 grep etcd
root     2273604 2273591  2 Mar17 ?        02:41:43 etcd grpc-proxy start --endpoints ........
❯ lsof +p 2273591
```
With lsof you can see the open files and connections for that process.
3. Verifying FIPS usage with Go
Red Hat’s developer site has an awesome article, Is your Go application FIPS compliant?. It shows you how to confirm whether your application includes FIPS support and actually uses it.
To confirm my container used FIPS, I connected to the project, grabbed the files off the container, and saw the libraries referenced.
```
$ oc project myapp
$ oc get pods
$ oc cp app-5d69757bd6-57g9x:/usr/bin/app /tmp/app
$ go tool nm /tmp/app | grep FIPS
11b5a4b0 T _cgo_af9b144e6dc6_Cfunc__goboringcrypto_FIPS_mode
138c30d8 d _g_FIPS_mode
137c2570 d crypto/tls.defaultCipherSuitesFIPS
137c25f0 d crypto/tls.defaultFIPSCipherSuitesTLS13
137c2610 d crypto/tls.defaultFIPSCurvePreferences
102007d0 t vendor/github.com/golang-fips/openssl-fips/openssl.enableBoringFIPSMode
138c3467 d vendor/github.com/golang-fips/openssl-fips/openssl.strictFIPS
```
Thus, you can show that it uses FIPS mode.
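A related spot check is the kernel's FIPS flag on the node or in the container:

```
# 1 means the kernel is running in FIPS mode; 0 means it is not.
cat /proc/sys/crypto/fips_enabled
```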
4. Verifying the ignition contents from a qcow2 image
I was playing with Single Node OpenShift and needed to see how out of date my code and ignition file were.
Checking the ISO told me the RHCOS was out of date (by about 4 months):
```
isoinfo -l -i ./rhcos-live.iso | grep -i IGN
----------   0    0    0          262144 Dec 13 2022 [   2378 00]  IGNITION.IMG;1
```
I checked the qcow2 image that the ISO was converted to, and then mounted it so I could extract some files:
```
❯ modprobe nbd max_part=8
❯ qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/rhcos-live-2023-03-21.qcow2
❯ fdisk /dev/nbd0 -l
❯ mkdir -p /mnt/tmp-f
❯ mount /dev/nbd0p1 /mnt/tmp-f
❯ find /mnt/tmp-f/
/mnt/tmp-f/
/mnt/tmp-f/coreos
/mnt/tmp-f/coreos/features.json
/mnt/tmp-f/coreos/kargs.json
/mnt/tmp-f/coreos/miniso.dat
/mnt/tmp-f/EFI
/mnt/tmp-f/EFI/redhat
/mnt/tmp-f/EFI/redhat/grub.cfg
/mnt/tmp-f/images
/mnt/tmp-f/images/efiboot.img
/mnt/tmp-f/images/ignition.img
/mnt/tmp-f/images/pxeboot
/mnt/tmp-f/images/pxeboot/initrd.img
/mnt/tmp-f/images/pxeboot/rootfs.img
/mnt/tmp-f/images/pxeboot/vmlinuz
/mnt/tmp-f/isolinux
/mnt/tmp-f/isolinux/boot.cat
/mnt/tmp-f/isolinux/boot.msg
/mnt/tmp-f/isolinux/isolinux.bin
/mnt/tmp-f/isolinux/isolinux.cfg
/mnt/tmp-f/isolinux/ldlinux.c32
/mnt/tmp-f/isolinux/libcom32.c32
/mnt/tmp-f/isolinux/libutil.c32
/mnt/tmp-f/isolinux/vesamenu.c32
/mnt/tmp-f/zipl.prm
```
From link I learned I could scan the `ignition.img` for ‘fyre’, which was part of the SSH public key. I then catted out the ignition file to confirm it had the updated content.
```
❯ xzcat /mnt/tmp-f/images/ignition.img | grep myid | wc -l
1
```
This shows how to dig into the contents.
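When you are done, unwind the mount and the nbd device (the reverse of the setup above):

```
❯ umount /mnt/tmp-f
❯ qemu-nbd --disconnect /dev/nbd0
```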
5. List of Operating System Short Ids for coreos
I needed to find the `os-variant` for CoreOS so I could use it with libvirt. Fortunately, the libosinfo package provides a tool that lists the details of the operating systems: `osinfo-query os`.
```
❯ osinfo-query os | grep coreos
 fedora-coreos-next    | Fedora CoreOS | Next    | http://fedoraproject.org/coreos/next
 fedora-coreos-stable  | Fedora CoreOS | Stable  | http://fedoraproject.org/coreos/stable
 fedora-coreos-testing | Fedora CoreOS | Testing | http://fedoraproject.org/coreos/testing
```
I then added `--os-variant=fedora-coreos-next` to my `virt-install` command.
6. Using qemu-kvm with a port number less than 1024
I needed to forward a privileged port, and was blocked with `qemu-kvm` and `virt-install`. I thought there was another process that might be listening on the port… so I checked:

```
❯ netstat -plutn | grep 443
tcp   0   0 0.0.0.0:6443   0.0.0.0:*   LISTEN   1026882/qemu-kvm
```
It was not, and it turned out to be a known issue with `qemu-kvm` and `selinux`.
The ports < 1024 are privileged, and only a root process (or a process with CAP_NET_BIND_SERVICE capabilities in Linux) can bind a socket to them. link to article
I ran the following:
```
❯ setcap CAP_NET_BIND_SERVICE+ep /usr/libexec/qemu-kvm
```
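You can verify the capability took effect with getcap (the output format varies slightly across libcap versions):

```
❯ getcap /usr/libexec/qemu-kvm
/usr/libexec/qemu-kvm cap_net_bind_service=ep
```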
I recreated the VM with `virt-install`, adding `--network network=default,model=e1000,hostfwd=tcp::443-:443,hostfwd=tcp::6443-:6443`.

Good luck – it has worked for me so far.
7. Using bpftrace with a Pod
You can use `bpftrace` to do analysis at the kernel level (some kprobes are not available).
- Create a debug Pod with elevated privileges.
```
cat << EOF > bpftrace-diagnostic-0.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    openshift.io/scc: node-exporter
  name: bpftrace-diagnostic-0
  namespace: openshift-etcd
spec:
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Always
  priority: 2000000000
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  containers:
    - name: diagnostic
      image: quay.io/centos/centos:stream8
      imagePullPolicy: IfNotPresent
      command: [ "sh", "-c", "sleep inf" ]
      resources:
        requests:
          cpu: 1000m
          memory: 2048Mi
      volumeMounts:
        - name: host-sys
          mountPath: /sys
      terminationMessagePath: /dev/termination-log
      securityContext:
        privileged: true
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          add:
            - CAP_SYS_ADMIN
            - CAP_FOWNER
            - NET_ADMIN
            - SYS_ADMIN
          drop:
            - ALL
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: true
  volumes:
    - name: host-sys
      hostPath:
        path: /sys
        type: Directory
  nodeName: master-0
  priorityClassName: system-cluster-critical
  hostPID: true
  hostIPC: true
  hostNetwork: true
EOF
```
- Delete an existing Pod and apply a new Pod.
```
oc delete pod/bpftrace-diagnostic-0; oc apply -f bpftrace-diagnostic-0.yaml
```
- Connect to the Pod
```
oc rsh pod/bpftrace-diagnostic-0
```
- Check the files opened with bpftrace
```
dnf install -y bpftrace
bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }'
```
You can see the files being opened.
```
❯ bpftrace -e 'kprobe:vfs_open { printf("open path: %s\n", str(((struct path *)arg0)->dentry->d_name.name)); }' | grep so | grep cry
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: .libgcrypt.so.20.hmac
open path: .libgcrypt.so.20.hmac
open path: libcrypto.so.1.1.1k
```
You can see FIPS checks.
```
❯ bpftrace -e 'kprobe:vfs_open { printf("open path: %s\n", str(((struct path *)arg0)->dentry->d_name.name)); }' | grep fips
open path: fips_enabled
open path: fips_enabled
```
Link to bpftrace docs https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#2-kprobekretprobe-dynamic-tracing-kernel-level-arguments
You can see the cool things happening on your Node.
-
Things I learned this week
Things that I’ve learned for the week:
- containerbuildsystem/atomic-reactor recently released v4.5.0, a simple Python library for building Docker images, supporting OSBS 2.0 components.
- Redbooks: IBM Cloud Pak for Data on IBM zSystems is a high-level overview of IBM zSystems with Cloud Pak for Data.
- Redbooks: Introduction to IBM PowerVM introduces PowerVM virtualization technologies on Power servers.
- operator-framework/operator-sdk v1.28.0 is released. On a Mac, use `brew upgrade operator-sdk`.
. - LinkedIn: Treating IBM Power like cattle will help you modernize! is a new blog from an IBMer which highlights a mentality switch:
Changing how you view your solutions can free you to modernise, evolve, and detect the challenges that lie ahead from a far better vantage point.
- k8s.gcr.io is being redirected, which may have consequences for your environment.
On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to registry.k8s.io with the eventual goal of sunsetting k8s.gcr.io.
- I was told about a very cool link to see all the various SIGs link
-
Things for the Week
This week I learned a few things of interest:
In close collaboration with Red Hat, the IBM Power Ecosystem team has continued its efforts to enable and advance products running on the Power platform. Click here to review the new releases in February: https://community.ibm.com/community/user/powerdeveloper/blogs/ashwini-sule/2023/03/02/red-hat-products-february-2023-releases
IBM Power Developer Exchange

I found the list helpful when using the Power architecture and OpenShift.
Want to develop applications with Red Hat OpenShift Dev Spaces but don’t know where to start? This blog outlines the step-by-step process for installing OpenShift Dev Spaces on the Red Hat OpenShift Container Platform on IBM Power: https://community.ibm.com/community/user/powerdeveloper/blogs/sachin-itagi/2023/03/03/developing-net-applications-on-ibm-power-using-vis
IBM Power Developer Exchange

I haven’t stayed current on all the cool things in OpenShift, but I thought this one held the most promise for end-to-end DevOps.
I needed to figure out why my worker was disconnected from the network:
```
oc get nodes
ssh core@osa21-worker-1.sslip.io
nmcli device
nmcli con reload env3
nslookup quay.io
```
After the connection reload, the networking worked. That told me there was something wrong with the local networking, so I checked the DNS Operator. I had to restart the operator and point it at a DNS server that was actually up.
If you hit some networking issues, the above will help.
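For reference, here is a sketch of the DNS Operator pokes I mean (resource names are the OpenShift defaults; adjust for your cluster):

```
# Restart the operator and the DNS daemonset, then adjust upstream resolvers.
oc -n openshift-dns-operator rollout restart deployment/dns-operator
oc -n openshift-dns rollout restart daemonset/dns-default
oc edit dns.operator/default
```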
You can solve the multi-architecture multi-image problem when automating and sharing images across IBM Power and x86 with container manifests. Learn how here: https://community.ibm.com/community/user/powerdeveloper/viewdocument/build-multi-architecture-container?CommunityKey=2d4070a1-ca52-4e83-8efb-02b41c42459e&tab=librarydocuments
IBM Power Developer Exchange

If you need to build manifest images, the above is very helpful.
-
How to grab ignition files
I was helping a colleague grab the latest ignition files for his OpenShift Container Platform workers.
- Connect to the bastion
```
❯ ssh root@<hostname>
```
- List the master nodes and select a master node
```
❯ oc get nodes -lnode-role.kubernetes.io/master=
NAME                     STATUS   ROLES                  AGE     VERSION
master-0.ocp-power.xyz   Ready    control-plane,master   5d19h   v1.25.2+7dab57f
master-1.ocp-power.xyz   Ready    control-plane,master   5d19h   v1.25.2+7dab57f
master-2.ocp-power.xyz   Ready    control-plane,master   5d19h   v1.25.2+7dab57f
```
- Get the IP address
```
❯ oc get nodes master-2.ocp-power.xyz -o json | jq -r .status.addresses
[
  {
    "address": "192.168.100.240",
    "type": "InternalIP"
  },
  {
    "address": "master-2.ocp-power.xyz",
    "type": "Hostname"
  }
]
```
- The Machine Config Server has the latest `ignition` file
```
❯ curl https://192.168.100.240:22623/config/worker -k -H "Accept: application/vnd.coreos.ignition+json;version=3.2.0" | jq -r . > worker-$(date +%Y-%m-%d).ign
```
`worker` can be replaced with `master` or any other `MachineConfigPool`. The machine-config-server is hosted on each of the master nodes and the bootstrap node.
Note: this downloads ignition spec version 3.2.0 (set by the Accept header above).
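You can confirm the spec version of the saved file with jq:

```
❯ jq -r .ignition.version worker-$(date +%Y-%m-%d).ign
3.2.0
```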
- Download the ignition file
```
❯ scp root@<hostname>:'~'/worker-$(date +%Y-%m-%d).ign .
worker-2023-03-02.ign                         100%  357KB 249.3KB/s   00:01
```
You can use this file for your work with worker ignition.