Handy Dandy Tricks for the Week

As I work with Linux and the OpenShift Container Platform, I run into handy tips, tricks, and patterns that make my job easier. I wanted to share them with you all (and record them so I can find these details in the future):

1. Verifying etcd data is encrypted

etcd relies on the kube-apiserver and openshift-apiserver to transparently encrypt a subset of the API resources using AES-CBC (and eventually AES-GCM). The encrypted subset includes: Secrets, ConfigMaps, Routes, OAuth access tokens, and OAuth authorize tokens.

To verify that the contents are actually encrypted, see https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted or https://docs.openshift.com/container-platform/4.12/security/encrypting-etcd.html
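
For example (a sketch, assuming a secret named secret1 exists in the default namespace), you can check the Encrypted status condition on the OpenShift API server, and read a key straight out of etcd to confirm the stored value carries the k8s:enc:aescbc:v1: prefix rather than plaintext:

# Check the encryption status condition reported by the API server
oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}{end}'

# From an etcd pod: dump a secret and look for the k8s:enc:aescbc:v1: prefix
# (the key prefix is /kubernetes.io/... on OpenShift, /registry/... on vanilla Kubernetes)
etcdctl get /kubernetes.io/secrets/default/secret1 | hexdump -C | head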

2. Using a Pod to run lsof across the whole Node

You can create a debug Pod with lsof.

  1. Create a debug Pod with elevated privileges.
cat << EOF > bpftrace-diagnostic-0.yaml
kind: Pod
apiVersion: v1
metadata:
  annotations:
    openshift.io/scc: node-exporter
  name: bpftrace-diagnostic-0
  namespace: openshift-etcd
spec:
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Always
  priority: 2000000000
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  containers:
    - name: diagnostic
      image: quay.io/centos/centos:stream8
      imagePullPolicy: IfNotPresent
      command: [ "sh", "-c", "sleep inf" ]
      resources:
        requests:
          cpu: 1000m
          memory: 2048Mi
      volumeMounts:
      - name: host-sys
        mountPath: /sys
      terminationMessagePath: /dev/termination-log
      securityContext:
        privileged: true
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          # Kubernetes capability names omit the CAP_ prefix
          add:
            - SYS_ADMIN
            - FOWNER
            - NET_ADMIN
          drop:
            - ALL
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: true
  volumes:
  - name: host-sys
    hostPath:
      path: /sys
      type: Directory
  nodeName: master-0
  priorityClassName: system-cluster-critical
  hostPID: true
  hostIPC: true
  hostNetwork: true
EOF
  2. Delete any existing pod and apply the new one.
oc delete pod/bpftrace-diagnostic-0; oc apply -f bpftrace-diagnostic-0.yaml
  3. Connect to the Pod.
oc rsh pod/bpftrace-diagnostic-0
  4. Install lsof and verify the files opened by a specific process.
❯ dnf install -y lsof
❯ ps -ef | grep 'etcd '
root     1236199 1235495  0 16:31 pts/1    00:00:00 grep etcd 
root     2273604 2273591  2 Mar17 ?        02:41:43 etcd grpc-proxy start --endpoints ........
❯ lsof -p 2273604

With lsof you can see the open files and connections for that process.
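
As a follow-on sketch (reusing the PID from above), lsof's -a flag ANDs selections together, so you can narrow the listing to just that process's network sockets:

# -p selects the PID, -i limits output to network files, -a ANDs the two
❯ lsof -a -p 2273604 -i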

3. Verifying FIPS usage with Go

Red Hat’s developer site has an awesome article, “Is your Go application FIPS compliant?”. It shows you how to confirm whether your application is built with FIPS support and actually uses it.

To confirm my container used FIPS, I connected to the project, grabbed the binary off the container, and checked which libraries it referenced.

$ oc project myapp
$ oc get pods
$ oc cp app-5d69757bd6-57g9x:/usr/bin/app /tmp/app
$ go tool nm /tmp/app | grep FIPS
11b5a4b0 T _cgo_af9b144e6dc6_Cfunc__goboringcrypto_FIPS_mode
138c30d8 d _g_FIPS_mode
137c2570 d crypto/tls.defaultCipherSuitesFIPS
137c25f0 d crypto/tls.defaultFIPSCipherSuitesTLS13
137c2610 d crypto/tls.defaultFIPSCurvePreferences
102007d0 t vendor/github.com/golang-fips/openssl-fips/openssl.enableBoringFIPSMode
138c3467 d vendor/github.com/golang-fips/openssl-fips/openssl.strictFIPS

The presence of these symbols shows the binary was built with FIPS-mode crypto.
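
As a quick runtime cross-check (a sketch, reusing the pod name from above), you can also confirm the kernel reports FIPS mode inside the container:

# 1 means the kernel is running with FIPS mode enabled
❯ oc rsh app-5d69757bd6-57g9x cat /proc/sys/crypto/fips_enabled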

4. Verifying the ignition contents from a qcow2 image

I was playing with Single Node OpenShift and needed to see how out of date my code and ignition file were.

Checking the ISO told me my RHCOS was out of date (by about four months):

isoinfo -l -i ./rhcos-live.iso  | grep -i IGN
----------   0    0    0          262144 Dec 13 2022 [   2378 00]  IGNITION.IMG;1 
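
As a sketch, you can also pull the embedded Ignition archive straight out of the ISO without mounting anything, using isoinfo's -x extraction flag and the ISO9660 path from the listing above (the search string below is a placeholder for something you expect in your ignition config):

# Extract IGNITION.IMG from the ISO and count matches in the decompressed contents
❯ isoinfo -i ./rhcos-live.iso -x '/IGNITION.IMG;1' | xzcat | grep -c 'my-search-string'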

I checked the qcow2 image that the ISO was converted to, then mounted it so I could extract some files:

❯ modprobe nbd max_part=8
❯ qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/rhcos-live-2023-03-21.qcow2
❯ fdisk /dev/nbd0 -l
❯ mkdir -p /mnt/tmp-f
❯ mount /dev/nbd0p1 /mnt/tmp-f
❯ find /mnt/tmp-f/
/mnt/tmp-f/
/mnt/tmp-f/coreos
/mnt/tmp-f/coreos/features.json
/mnt/tmp-f/coreos/kargs.json
/mnt/tmp-f/coreos/miniso.dat
/mnt/tmp-f/EFI
/mnt/tmp-f/EFI/redhat
/mnt/tmp-f/EFI/redhat/grub.cfg
/mnt/tmp-f/images
/mnt/tmp-f/images/efiboot.img
/mnt/tmp-f/images/ignition.img
/mnt/tmp-f/images/pxeboot
/mnt/tmp-f/images/pxeboot/initrd.img
/mnt/tmp-f/images/pxeboot/rootfs.img
/mnt/tmp-f/images/pxeboot/vmlinuz
/mnt/tmp-f/isolinux
/mnt/tmp-f/isolinux/boot.cat
/mnt/tmp-f/isolinux/boot.msg
/mnt/tmp-f/isolinux/isolinux.bin
/mnt/tmp-f/isolinux/isolinux.cfg
/mnt/tmp-f/isolinux/ldlinux.c32
/mnt/tmp-f/isolinux/libcom32.c32
/mnt/tmp-f/isolinux/libutil.c32
/mnt/tmp-f/isolinux/vesamenu.c32
/mnt/tmp-f/zipl.prm

From a linked article, I learned I could scan ignition.img for ‘fyre’, which was part of the SSH public key.

I then catted out the ignition file to confirm it had the updated content:

❯ xzcat /mnt/tmp-f/images/ignition.img | grep myid | wc -l
1
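
When you are done, unmount and disconnect the NBD device so the qcow2 image is not left attached:

❯ umount /mnt/tmp-f
❯ qemu-nbd --disconnect /dev/nbd0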

This shows how to dig into the contents of the image.

5. List of Operating System Short IDs for coreos

I needed to find the os-variant for coreos so I could use it with libvirt. Fortunately, the libosinfo package provides a tool that lists the details of known operating systems: osinfo-query os.

❯ osinfo-query os | grep coreos
 fedora-coreos-next    | Fedora CoreOS | Next    | http://fedoraproject.org/coreos/next
 fedora-coreos-stable  | Fedora CoreOS | Stable  | http://fedoraproject.org/coreos/stable
 fedora-coreos-testing | Fedora CoreOS | Testing | http://fedoraproject.org/coreos/testing

I then added --os-variant=fedora-coreos-next to my virt-install command.
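
For context, here is a minimal sketch of where the flag lands in a virt-install invocation (the name, sizes, and disk path are placeholders):

virt-install \
  --name fcos-test \
  --memory 4096 --vcpus 2 \
  --os-variant=fedora-coreos-next \
  --import --disk /var/lib/libvirt/images/fedora-coreos.qcow2 \
  --network network=default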

6. Using qemu-kvm with a port number less than 1024

I needed to forward a privileged port, and was blocked with qemu-kvm and virt-install. I thought another process might be listening on the port, so I checked:

❯ netstat -plutn | grep 443
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1026882/qemu-kvm    

It was not; it turned out to be a known issue with qemu-kvm and SELinux.

Ports below 1024 are privileged, and only a root process (or a process with the CAP_NET_BIND_SERVICE capability in Linux) can bind a socket to them.

I ran the following:

❯ setcap CAP_NET_BIND_SERVICE+ep /usr/libexec/qemu-kvm
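
You can confirm the file capability took effect with getcap (it should report cap_net_bind_service=ep on the binary):

❯ getcap /usr/libexec/qemu-kvm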

I then recreated the VM with virt-install and --network network=default,model=e1000,hostfwd=tcp::443-:443,hostfwd=tcp::6443-:6443

Good luck; it has worked for me so far.

7. Using bpftrace with a Pod

You can use bpftrace to do analysis at the kernel level (though some kprobes are not available).

  1. Create a debug Pod with elevated privileges, reusing the bpftrace-diagnostic-0.yaml manifest from tip #2 above.
  2. Delete any existing Pod and apply the new one.
oc delete pod/bpftrace-diagnostic-0; oc apply -f bpftrace-diagnostic-0.yaml
  3. Connect to the Pod.
oc rsh pod/bpftrace-diagnostic-0
  4. Install bpftrace and check which files are opened.
dnf install -y bpftrace
bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }'

You can see the files being opened:

❯ bpftrace -e 'kprobe:vfs_open { printf("open path: %s\n", str(((struct path *)arg0)->dentry->d_name.name)); }' | grep so | grep cry
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: libgcrypt.so.20.2.5
open path: .libgcrypt.so.20.hmac
open path: .libgcrypt.so.20.hmac
open path: libcrypto.so.1.1.1k

You can see the FIPS checks:

❯ bpftrace -e 'kprobe:vfs_open { printf("open path: %s\n", str(((struct path *)arg0)->dentry->d_name.name)); }' | grep fips
open path: fips_enabled
open path: fips_enabled

The bpftrace reference guide has more: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#2-kprobekretprobe-dynamic-tracing-kernel-level-arguments
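
Beyond printing each event, bpftrace's aggregation maps give a quick summary; for example, this sketch counts openat() calls per command name and prints the map when you hit Ctrl-C:

❯ bpftrace -e 'tracepoint:syscalls:sys_enter_openat { @opens[comm] = count(); }'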

You can see the cool things happening on your Node.

