My teammate and I set up OpenShift Node Feature Discovery (NFD) and wanted to add labels for specific use cases. I stumbled across Node Feature Discovery: Advanced Configuration, so I edited the NodeFeatureDiscovery resource, nfd-instance.
$ oc -n openshift-nfd edit NodeFeatureDiscovery nfd-instance
nodefeaturediscovery.nfd.openshift.io/nfd-instance edited
I added addDirHeader and v under klog in spec.customConfig.configData so we get some nice verbose logging. You'll want to set v to 3:
spec:
  customConfig:
    configData: |
      klog:
        addDirHeader: false
        v: 3
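If you'd rather not edit the resource interactively, the same klog settings can be applied with a merge patch; this is just a sketch that mirrors the customConfig shown above:
$ oc -n openshift-nfd patch NodeFeatureDiscovery nfd-instance --type merge \
    -p '{"spec":{"customConfig":{"configData":"klog:\n  addDirHeader: false\n  v: 3\n"}}}'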
No restart is necessary; once you change the configData, the workers pick it up and you'll see output like this (using oc -n openshift-nfd logs nfd-worker-4v2pw):
I0623 14:19:58.522028 1 memory.go:134] No NVDIMM devices present
I0623 14:39:47.570487 1 memory.go:99] discovered memory features:
I0623 14:39:47.570781 1 memory.go:99] Instances:
I0623 14:39:47.570791 1 memory.go:99] nv:
I0623 14:39:47.570798 1 memory.go:99] Elements: []
I0623 14:39:47.570805 1 memory.go:99] Keys: {}
I0623 14:39:47.570812 1 memory.go:99] Values:
I0623 14:39:47.570819 1 memory.go:99] numa:
I0623 14:39:47.570826 1 memory.go:99] Elements:
I0623 14:39:47.570833 1 memory.go:99] is_numa: "false"
I0623 14:39:47.570840 1 memory.go:99] node_count: "1"
I0623 14:19:58.522052 1 nfd-worker.go:472] starting feature discovery...
I0623 14:19:58.522062 1 nfd-worker.go:484] feature discovery completed
I0623 14:19:58.522079 1 nfd-worker.go:485] labels discovered by feature sources:
I0623 14:19:58.522141 1 nfd-worker.go:485] {}
I0623 14:19:58.522155 1 nfd-worker.go:565] sending labeling request to nfd-master
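If you want the same view from the other workers, you can list the nfd-worker pods and grep an individual pod's log; the pod name here is the one from above, so substitute your own:
$ oc -n openshift-nfd get pods -o name | grep nfd-worker
$ oc -n openshift-nfd logs nfd-worker-4v2pw | grep -i memory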
You can check your labels with:
$ oc get node -o json | jq -r '.items[].metadata.labels'
...
{
  "beta.kubernetes.io/arch": "ppc64le",
  "beta.kubernetes.io/os": "linux",
  "cpumanager": "enabled",
  "kubernetes.io/arch": "ppc64le",
  "kubernetes.io/hostname": "worker-1.xip.io",
  "kubernetes.io/os": "linux",
  "node-role.kubernetes.io/worker": "",
  "node.openshift.io/os_id": "rhcos"
}
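At this point no NFD feature labels have been published yet (note the empty {} in the worker log above). A quick way to check just the NFD-managed labels is to filter on the feature.node.kubernetes.io prefix; this jq filter is only a convenience:
$ oc get node -o json | jq -r '.items[].metadata.labels | with_entries(select(.key | startswith("feature.node.kubernetes.io")))'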
I then updated the configData with labelSources and featureSources set to all,-memory, i.e. every source except memory. You can see the list of sources at GitHub: node-feature-discovery/source and in Feature Discovery: sources.
spec:
  workerConfig:
    configData: |
      core:
        labelSources: [-memory,all]
        featureSources: [-memory,all]
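After saving, you can re-check the worker log for the "labels discovered by feature sources" line; with the new sources enabled it should no longer print {}. Reusing the pod name from above:
$ oc -n openshift-nfd logs nfd-worker-4v2pw | grep -A 1 "labels discovered by feature sources"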
Note: There is an additional configuration example at link.
Check the labels again and you'll see the feature labels are there.
$ oc get node -o json | jq -r '.items[].metadata.labels'
{
  "beta.kubernetes.io/arch": "ppc64le",
  "beta.kubernetes.io/os": "linux",
  "cpumanager": "enabled",
  "feature.node.kubernetes.io/cpu-cpuid.ALTIVEC": "true",
  "feature.node.kubernetes.io/cpu-cpuid.ARCHPMU": "true",
  "feature.node.kubernetes.io/cpu-cpuid.ARCH_2_06": "true",
  "feature.node.kubernetes.io/cpu-cpuid.ARCH_2_07": "true",
  "feature.node.kubernetes.io/cpu-cpuid.ARCH_3_00": "true",
  "feature.node.kubernetes.io/cpu-cpuid.DARN": "true",
  "feature.node.kubernetes.io/cpu-cpuid.DFP": "true",
  "feature.node.kubernetes.io/cpu-cpuid.DSCR": "true",
  "feature.node.kubernetes.io/cpu-cpuid.EBB": "true",
  "feature.node.kubernetes.io/cpu-cpuid.FPU": "true",
  "feature.node.kubernetes.io/cpu-cpuid.HTM": "true",
  "feature.node.kubernetes.io/cpu-cpuid.HTM-NOSC": "true",
  "feature.node.kubernetes.io/cpu-cpuid.IC_SNOOP": "true",
  "feature.node.kubernetes.io/cpu-cpuid.IEEE128": "true",
  "feature.node.kubernetes.io/cpu-cpuid.ISEL": "true",
  "feature.node.kubernetes.io/cpu-cpuid.MMU": "true",
  "feature.node.kubernetes.io/cpu-cpuid.PPC32": "true",
  "feature.node.kubernetes.io/cpu-cpuid.PPC64": "true",
  "feature.node.kubernetes.io/cpu-cpuid.SMT": "true",
  "feature.node.kubernetes.io/cpu-cpuid.TAR": "true",
  "feature.node.kubernetes.io/cpu-cpuid.TRUE_LE": "true",
  "feature.node.kubernetes.io/cpu-cpuid.VCRYPTO": "true",
  "feature.node.kubernetes.io/cpu-cpuid.VSX": "true",
  "feature.node.kubernetes.io/cpu-hardware_multithreading": "true",
  "feature.node.kubernetes.io/kernel-config.NO_HZ": "true",
  "feature.node.kubernetes.io/kernel-config.NO_HZ_FULL": "true",
  "feature.node.kubernetes.io/kernel-selinux.enabled": "true",
  "feature.node.kubernetes.io/kernel-version.full": "4.18.0-305.45.1.el8_4.ppc64le",
  "feature.node.kubernetes.io/kernel-version.major": "4",
  "feature.node.kubernetes.io/kernel-version.minor": "18",
  "feature.node.kubernetes.io/kernel-version.revision": "0",
  "feature.node.kubernetes.io/system-os_release.ID": "rhcos",
  "feature.node.kubernetes.io/system-os_release.OPENSHIFT_VERSION": "4.10",
  "feature.node.kubernetes.io/system-os_release.OSTREE_VERSION": "410.84.202205120749-0",
  "feature.node.kubernetes.io/system-os_release.RHEL_VERSION": "8.4",
  "feature.node.kubernetes.io/system-os_release.VERSION_ID": "4.10",
  "feature.node.kubernetes.io/system-os_release.VERSION_ID.major": "4",
  "feature.node.kubernetes.io/system-os_release.VERSION_ID.minor": "10",
  "kubernetes.io/arch": "ppc64le",
  "kubernetes.io/hostname": "worker1.xip.io",
  "kubernetes.io/os": "linux",
  "node-role.kubernetes.io/worker": "",
  "node.openshift.io/os_id": "rhcos"
}
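With the feature labels published, you can target workloads at nodes with specific features. Here is a minimal sketch, assuming a throwaway pod name and image, that uses the cpu-hardware_multithreading label from the output above as a nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  name: smt-test        # placeholder name
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-hardware_multithreading: "true"
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]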