Author: Paul

  • How to use OpenSCAP Scanner on a Mac

    For those not yet using openscap-scanner on their systems, OpenSCAP is a security auditing framework that utilizes the Extensible Configuration Checklist Description Format (XCCDF); the openscap-scanner tool evaluates a security profile against a target system.

    One gotcha: I have a Mac, and the tool is not natively supported on macOS. I decided to use it through a Fedora container running in Podman.

    Here are the steps to run it on a Mac with the ComplianceAsCode/content release.

    Steps

    1. Download the Dockerfile
    2. Build the image
    $ podman build -f Dockerfile -t ocp-power.xyz/compliance/openscap-wrapper:latest
    ...
    
    3. Download the content file scap-security-guide-0.1.65.zip
    $ curl -O -L https://github.com/ComplianceAsCode/content/releases/download/v0.1.65/scap-security-guide-0.1.65.zip
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100  130M  100  130M    0     0  2752k      0  0:00:48  0:00:48 --:--:-- 5949k
    
    4. Unzip the scap-security-guide-0.1.65.zip file.
    $ unzip scap-security-guide-0.1.65.zip
    
    5. Rename the directory scap-security-guide-0.1.65 to scap.
    $ mv scap-security-guide-0.1.65 scap
    
    6. List the profiles in a specific XML.
    $ podman run --rm -v ./scap:/scap ocp-power.xyz/compliance/openscap-wrapper:latest oscap info --profiles /scap/ssg-ocp4-ds.xml
    xccdf_org.ssgproject.content_profile_cis-node:CIS Red Hat OpenShift Container Platform 4 Benchmark
    xccdf_org.ssgproject.content_profile_cis:CIS Red Hat OpenShift Container Platform 4 Benchmark
    xccdf_org.ssgproject.content_profile_e8:Australian Cyber Security Centre (ACSC) Essential Eight
    xccdf_org.ssgproject.content_profile_high-node:NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level
    xccdf_org.ssgproject.content_profile_high:NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level
    xccdf_org.ssgproject.content_profile_moderate-node:NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level
    xccdf_org.ssgproject.content_profile_moderate:NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level
    xccdf_org.ssgproject.content_profile_nerc-cip-node:North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the  Red Hat OpenShift Container Platform - Node level
    xccdf_org.ssgproject.content_profile_nerc-cip:North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the  Red Hat OpenShift Container Platform - Platform level
    xccdf_org.ssgproject.content_profile_pci-dss-node:PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
    xccdf_org.ssgproject.content_profile_pci-dss:PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
    
    7. Get details on the profile.
    $ podman run --rm  -v ./scap:/scap ocp-power.xyz/compliance/openscap-wrapper:latest oscap info --profile xccdf_org.ssgproject.content_profile_cis-node /scap/ssg-ocp4-ds.xml
    Document type: Source Data Stream
    Imported: 2022-12-02T19:09:36
    
    Stream: scap_org.open-scap_datastream_from_xccdf_ssg-ocp4-xccdf.xml
    Generated: (null)
    Version: 1.3
    Profile
            Title: CIS Red Hat OpenShift Container Platform 4 Benchmark
            Id: xccdf_org.ssgproject.content_profile_cis-node
    
            Description: This profile defines a baseline that aligns to the Center for Internet Security® Red Hat OpenShift Container Platform 4 Benchmark™, V1.1.  This profile includes Center for Internet Security® Red Hat OpenShift Container Platform 4 CIS Benchmarks™ content.  Note that this part of the profile is meant to run on the Operating System that Red Hat OpenShift Container Platform 4 runs on top of.  This profile is applicable to OpenShift versions 4.6 and greater.
    
    8. Now, I can run more advanced commands on the profiles on my Mac.
    $ podman run --rm  -v ./scap:/scap ocp-power.xyz/compliance/openscap-wrapper:latest oscap oval generate report /scap/ssg-ocp4-ds.xml 2>&1
    

    References

    1. OpenSCAP Downloads
    2. OpenSCAP source code
    3. OpenSCAP Manual Source
    4. OpenSCAP Manual Published

    Notes

    Note: I found I had to do the following on my Mac to get the volume to mount.

    $ podman machine stop
    $ podman machine set --rootful
    $ podman machine start
    $ sudo /opt/homebrew/Cellar/podman/4.3.1/bin/podman-mac-helper install
    $ podman machine stop; podman machine start
    
  • Access to Power Systems for Development

    Linda, a colleague in IBM Power Systems development, assembled a nice compendium of resources for developing solutions on the IBM Power (ppc64le) architecture. To read more, click on the link and review the details.

    Want access to IBM Power Hardware for development efforts? We have compiled a list of cloud, emulation, and on-prem options for you to choose from. Click the link to access all the tools you need to get started. 

    IBM #PowerSystems #IBMCloud #OpenSourceSoftware #ITInfrastructure #PDeX

    https://community.ibm.com/community/user/powerdeveloper/blogs/linda-alkire-kinnunen/2022/08/08/accelerate-your-open-source-development-with-acces 

    Note: for most of what I work on, QEMU turns out to be sufficient.

  • Using Ghost on OpenShift Container Platform

    To demonstrate a multi-tiered web application, I deployed Ghost, the microblogging platform, using kustomize. Kustomize is a higher-level orchestration of the steps to deploy an application with environment-specific overlays.

    Steps

    1. Clone the repository
    $ git clone https://github.com/prb112/openshift-demo.git
    
    2. Install kustomize
    $ brew install kustomize
    
    3. Log in to your cluster using oc.

    4. Generate a randomized password

    $ ENV_PASS=$(openssl rand -hex 10)
    $ echo ${ENV_PASS}
    

    Note, save the output…

    5. Generate the working URL for the cluster/ghost app.
    $ export WEB_DOMAIN=https://web-route-ghost.apps.$(oc get ingress.config.openshift.io cluster -o yaml | grep domain | awk '{print $NF}')
    $ echo ${WEB_DOMAIN}
    
    6. Change to the ghost/deploy directory using cd openshift-demo/ghost/deploy

    7. Create the secret for the database

    $ cat secrets/01_db_secret.yml | sed "s|ENV_PASS|${ENV_PASS}|" | oc apply -f -
    
    8. Create the configmap for the Ghost app URL.
    $ cat secrets/02_web_cm.yml | sed "s|WEB_DOMAIN|${WEB_DOMAIN}|" | oc apply -f -
    
    9. Create the deployment for the website
    $ oc apply -k overlays/dev
    namespace/ghost configured
    service/db-service unchanged
    service/web unchanged
    persistentvolumeclaim/db-pvc unchanged
    persistentvolumeclaim/web-content unchanged
    deployment.apps/ghost-db unchanged
    deployment.apps/web unchanged
    route.route.openshift.io/web-route unchanged
    
    10. To clean it up, you can run…
    $ oc delete -k overlays/dev
    namespace "ghost" deleted
    service "db-service" deleted
    service "web" deleted
    persistentvolumeclaim "db-pvc" deleted
    persistentvolumeclaim "web-content" deleted
    deployment.apps "ghost-db" deleted
    deployment.apps "web" deleted
    route.route.openshift.io "web-route" deleted
    
    11. To see your website URL, you can grab the config map.
    $ oc get cm -o yaml
    
    12. Navigate to the URL, such as https://web-route-ghost.apps.xyz.zzz.zyz.com/ghost/, to start setting up your site.

    Note: if I had had time, I would have created a non-privileged MySQL user and used that on the MySQL instance.

    References

    1. https://elixm.com/how-to-deploy-ghost-blog-with-kubernetes/
    2. https://hub.docker.com/_/ghost
    3. https://hub.docker.com/_/mysql
    4. https://github.com/openshift-cs/ghost-example/blob/master/ghost_template.yaml
  • Support for detecting nx-gzip coprocessor feature in Node Feature Discovery

    The Kubernetes add-on Node Feature Discovery has been enhanced with a new coprocessor feature and support for detecting NX-GZIP on Power10. This work supports the use of the libnxz/power-gzip library.

    We set up Kubernetes 1.25 on a Power10 RHEL 9.1 PowerVM instance, built the feature, and submitted the PR on behalf of IBM. You’ll need RHEL 9.1 as the operating system on Power10.

    When a worker or control plane node has Node Feature Discovery enabled on a Power10 PowerVM with Red Hat Enterprise Linux 9.1 or higher, the label coprocessor.nx_gzip is present on the node. You can see more details in PR 956.

  • Downloading oc-compliance on ppc64le

    My team is working with the OpenShift Container Platform’s optional Compliance Operator. The Compliance Operator has a supporting tool, oc-compliance.

    One tricky element was downloading the oc-compliance plugin, and I’ve documented the steps here to help.

    Steps

    1. Navigate to https://console.redhat.com/openshift/downloads#tool-pull-secret

    If prompted, log in with your Red Hat Network ID.

    2. Under Tokens, select Pull secret, then click Download

    3. Copy the pull-secret to your working directory

    4. Make the .local/bin directory to drop the plugin.

    $ mkdir -p ~/.local/bin
    
    5. Run the oc-compliance-rhel8 container image.
    $ podman run --authfile pull-secret --rm -v ~/.local/bin:/mnt/out:Z --arch ppc64le registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/
    Trying to pull registry.redhat.io/compliance/oc-compliance-rhel8:stable...
    Getting image source signatures
    Checking if image destination supports signatures
    Copying blob 847f634e7f1e done  
    Copying blob 7643f185b5d8 done  
    Copying blob d6050ae37df3 done  
    Copying config 2f0afdf522 done  
    Writing manifest to image destination
    Storing signatures
    
    6. Check that the file is ppc64le
    $ file ~/.local/bin/oc-compliance 
    /root/.local/bin/oc-compliance: ELF 64-bit LSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), dynamically linked, interpreter /lib64/ld64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=d5bff511ee48b6cbc6afce6420e780da2f0eacdc, not stripped
    

    If it doesn’t work, you can always verify the architecture of the machine podman is running on:

    $ arch
    ppc64le
    

    It should say ppc64le.

    You’ve seen how to download the ppc64le build.

  • OpenShift on Power Blogs…

    Recently, I started a leadership position on a new squad focused on OpenShift on IBM Power Systems. Two of my teammates have posted blogs about their work:

    1. Configuring Seccomp Profile on OpenShift Container Platform for Security and Compliance on Power from Aditi covers the ins and outs of configuring the seccomp profile, and tells you why you should care and how you can configure it with your workload.
    2. Encrypting etcd data on Power from Gaurav covers encrypting the etcd data store on OpenShift and how to go through some common operations related to etcd management when it’s encrypted.
    3. Encrypting OpenShift Container Platform Disks on Power Systems from Gaurav covers encryption concepts, how to setup an external tang cluster on IBM PowerVS, how to setup a cluster on IBM PowerVS and how to confirm the encrypted disk setup.
    4. OpenShift TLS Security Profiles on IBM Power from Gaurav covers the setting up of TLS inside OpenShift and verifying the settings.
    5. Lessons Learned using Security Context Constraints on OpenShift from Aditi covers key things she learned from using Security Context Constraints.
    6. Securing NFS Attached Storage Notes from Aditi covers restricting the use of NFS mounts/securing the attached storage.
    7. Using the Compliance Operator to support PCI-DSS on OpenShift Container Platform on Power from Aditi dives into the PCI-DSS profile with the Compliance Operator.
    8. Configuring a PCI-DSS compliant OpenShift Container Platform cluster on IBM Power from Gaurav dives into configuring a compliance cluster with recipes to enable proper configuration.

    I hope you found these as useful as I did. Best wishes, PB

  • Tweak for GoLang PowerPC Build

    As many know, Go is designed to build architecture- and operating-system-specific binaries; such a combination is called a target. One can run GOARCH=ppc64le GOOS=linux go build to build for a specific OS and architecture. There is a nice little tweak that considers the architecture version and optimizes the selection of the assembler (ASM) code used when building.

    To target a specific processor generation on the Power architecture (ppc64le), you can use GOPPC64:

    1. power10 – runs on Power10 only.
    2. power9 – runs on Power9 and Power10.
    3. power8 (the default) – runs on Power8, Power9, and Power10.

    For example, the command is GOARCH=ppc64le GOOS=linux GOPPC64=power9 go build

    Selecting the right target may yield better results on newer Power hardware.

  • Using Go Memory and Processor Limits with Kubernetes DownwardAPI

    As many know, Go is designed for performance, with an emphasis on memory management and garbage collection. When run in cgroups under Kubernetes and Red Hat OpenShift, Go sizes itself for the memory and processors available on the node. As noted by Uber’s automaxprocs, on a shared system this can mean slightly degraded performance when GOMAXPROCS is not limited to the CPUs actually available (e.g., a prescribed limit).

    Using environment variables, Go lets a user control memory limits and processor limits.

    GOMEMLIMIT limits the Go heap and all other runtime memory; see runtime/debug.SetMemoryLimit.

    GOMAXPROCS limits the number of operating system threads that can execute user-level Go code simultaneously.

    There is an open source Go package, automaxprocs, that sets GOMAXPROCS automatically when used with cgroups.

    In OpenShift/Kubernetes, there is a concept of spec.containers[].resources.limits for cpus and memory, as described in the article Resource Management for Pods and Containers.

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
      - name: my-container
        image: myimage
        resources:
          limits:
            memory: "128Mi"
            cpu: 2
    

    To facilitate sharing these details with a container, Kubernetes provides the downwardAPI. The downwardAPI provides the details as an environment variable or a file.

    To see how this works in combination:

    1. Create a YAML file test.yaml with resources.limits and env.valueFrom.resourceFieldRef.resource set to populate the GOMEMLIMIT and GOMAXPROCS values you want.
    kind: Namespace
    apiVersion: v1
    metadata:
      name: demo
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-go-limits
      namespace: demo
    spec:
      containers:
        - name: test-container
          image: registry.access.redhat.com/ubi8/pause
          resources:
            limits:
              memory: 128Mi
              cpu: "2"
          command:
            - sh
            - '-c'
          args:
            - >-
              while true; do echo -en '\n'; printenv GOMEMLIMIT; printenv GOMAXPROCS;
              sleep 10; done;
          env:
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  containerName: test-container
                  resource: limits.memory
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  containerName: test-container
                  resource: limits.cpu
      restartPolicy: Never
    
    2. Apply the file: oc apply -f test.yaml

    3. Check the pod logs

    $ oc -n demo logs pod/dapi-go-limits
    134217728
    2
    
    4. Delete the pod when you are done with the demonstration
    $ oc -n demo delete pod/dapi-go-limits
    pod "dapi-go-limits" deleted
    

    This is a clear and easy way to control the Go runtime configuration.

    References

    • https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
    • https://stackoverflow.com/questions/17853831/what-is-the-gomaxprocs-default-value
    • https://github.com/uber-go/automaxprocs#performance
    • https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
  • Linking Quay to OpenShift and you hit `x509: certificate signed by unknown authority`

    If you see the following error when you link OpenShift to a Quay registry with a self-signed certificate, I’ve got the steps for you.

    Events:
      Type     Reason          Age                From               Message
      ----     ------          ----               ----               -------
      Normal   Scheduled       38s                default-scheduler  Successfully assigned openshift-marketplace/my-operator-catalog-29vl8 to worker.output.xyz
      Normal   AddedInterface  36s                multus             Add eth0 [10.131.1.5/23] from openshift-sdn
      Normal   Pulling         23s (x2 over 36s)  kubelet            Pulling image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le"
      Warning  Failed          22s (x2 over 35s)  kubelet            Failed to pull image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le": rpc error: code = Unknown desc = pinging container registry quay-demo.host.xyz:8443: Get "https://quay-demo.host.xyz:8443/v2/": x509: certificate signed by unknown authority
      Warning  Failed          22s (x2 over 35s)  kubelet            Error: ErrImagePull
      Normal   BackOff         8s (x2 over 35s)   kubelet            Back-off pulling image "quay-demo.host.xyz:8443/repository/ocp/openshift4_12_ppc64le"
      Warning  Failed          8s (x2 over 35s)   kubelet            Error: ImagePullBackOff
    

    Steps

    1. Set the environment variables for your registry hostname and port
    export REGISTRY_HOSTNAME=quay-demo.host.xyz
    export REGISTRY_PORT=8443
    
    2. Extract all the CA certs
    echo "" | openssl s_client -showcerts -prexit -connect "${REGISTRY_HOSTNAME}:${REGISTRY_PORT}" 2> /dev/null | sed -n -e '/BEGIN CERTIFICATE/,/END CERTIFICATE/ p' > tmp.crt
    
    3. Display the cert to verify you see the Issuer
    # openssl x509 -in tmp.crt -text | grep Issuer
            Issuer: C = US, ST = VA, L = New York, O = Quay, OU = Division, CN = quay-demo.host.xyz
    
    4. Create the configmap in the openshift-config namespace
    # oc create configmap registry-quay -n openshift-config --from-file="${REGISTRY_HOSTNAME}..${REGISTRY_PORT}=$(pwd)/tmp.crt"
    configmap/registry-quay created
    
    5. Add an additionalTrustedCA to the cluster image config.
    # oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-quay"}}}' --type=merge
    image.config.openshift.io/cluster patched
    
    6. Verify your config is updated
    # oc get image.config.openshift.io/cluster -o yaml
    apiVersion: config.openshift.io/v1
    kind: Image
    metadata:
      annotations:
        include.release.openshift.io/ibm-cloud-managed: "true"
        include.release.openshift.io/self-managed-high-availability: "true"
        include.release.openshift.io/single-node-developer: "true"
        release.openshift.io/create-only: "true"
      creationTimestamp: "2022-10-20T15:35:08Z"
      generation: 2
      name: cluster
      ownerReferences:
      - apiVersion: config.openshift.io/v1
        kind: ClusterVersion
        name: version
        uid: a3df97ca-73ff-4a72-93b1-f3ef7d51e329
      resourceVersion: "6299552"
      uid: f7e56517-486d-4530-8e14-16ef0deed462
    spec:
      additionalTrustedCA:
        name: registry-quay
    status:
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
    
    7. Check the pod that failed to pull, and you should see that it now succeeds.

  • Use QEMU to Build s390x Images

    Tips to build s390x images with QEMU:

    1. Connect to a build machine

    ssh root@ip

    2. Clone the operator

    git clone https://github.com/prb112/operator.git

    3. Install qemu-kvm, buildah, and podman-docker

    yum install -y qemu-kvm buildah podman-docker

    /usr/bin/docker run --rm --privileged tonistiigi/binfmt:latest --install all
    Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
    ✔ docker.io/tonistiigi/binfmt:latest
    Trying to pull docker.io/tonistiigi/binfmt:latest...
    Getting image source signatures
    Copying blob e9c608ddc3cb done  
    Copying blob 8d4d64c318a5 done  
    Copying config 354472a378 done  
    Writing manifest to image destination
    Storing signatures
    installing: arm64 OK
    installing: arm OK
    installing: ppc64le OK
    installing: mips64 OK
    installing: riscv64 OK
    installing: mips64le OK
    installing: s390x OK
    {
      "supported": [
        "linux/amd64",
        "linux/arm64",
        "linux/riscv64",
        "linux/ppc64le",
        "linux/s390x",
        "linux/386",
        "linux/mips64le",
        "linux/mips64",
        "linux/arm/v7",
        "linux/arm/v6"
      ],
      "emulators": [
        "kshcomp",
        "qemu-aarch64",
        "qemu-arm",
        "qemu-mips64",
        "qemu-mips64el",
        "qemu-ppc64le",
        "qemu-riscv64",
        "qemu-s390x"
      ]
    }
    

    /usr/bin/buildah bud --arch s390x -f $(pwd)/build/Dockerfile --format docker --tls-verify=true -t op:v0.1.1-linux-s390x $(pwd)/