Author: Paul

  • IBM FHIR Server: Getting Started Links

    The IBM FHIR Server supports FHIR R4 and is at version 4.10.2.

    The conformance page outlines the features and capabilities of the main build, the user’s guide outlines the features and configurations, and the module catalog outlines the modules that make up the server.

    The IBM FHIR Server is delivered in three ways:

    1. A Modular Application
    • Jars published to Maven Central
    • Search for Jars at https://mvnrepository.com/artifact/com.ibm.fhir
    • Javadocs
    2. Docker (see the sketch below)
    3. A Helm Chart to Install a Working Environment
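    Of the three, the Docker image is the quickest way to try the server. A minimal sketch, using the same image and bootstrap flag shown later in this post (the Derby bootstrap is for local experimentation only):

    # Run the server with an embedded Derby database (local testing only)
    docker run -p 9443:9443 --name fhir -e BOOTSTRAP_DB=true ibmcom/ibm-fhir-server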

    If you have more specifics, we can work on diving into the features.

  • USING THE IBM FHIR SERVER WITH HELM

    This article walks folks through using the IBM FHIR Server’s Helm chart with Docker Desktop’s Kubernetes and getting it up and running.
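    A minimal sketch of the flow the article covers, assuming Docker Desktop’s Kubernetes is enabled (the chart repository URL and release name are illustrative; check the chart’s README for the current coordinates):

    # Target Docker Desktop's built-in Kubernetes
    kubectl config use-context docker-desktop
    # Add the chart repository and install a release (repo URL and names are illustrative)
    helm repo add alvearie https://alvearie.io/alvearie-helm
    helm repo update
    helm install fhir-server alvearie/ibm-fhir-server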

  • Help… run nsenter

    Per Enqueue Zero, nsenter is a utility that enters the namespaces of one or more other processes and then executes the specified program. In other words, we jump to the inner side of the namespace.

    Find the target process (here, by searching for the S+ stat flag), use its PID to target the namespace, and run the local tools in that namespace. This is very helpful where the Docker container does not contain the necessary tools by default.

    [root@localhost ~]# ps aux | grep 'S+'
    gdm         1203  0.0  0.0   6084  1064 tty1     S+    2021   0:00 dbus-run-session -- gnome-session --autostart /usr/share/gdm/greeter/autostart
    root       24439  0.0  0.0   4420   632 pts/0    S+    2021   0:00 tail -f /database/config/db2inst1/sqllib/db2dump/DIAG0000/db2diag.log
    root      922523  0.0  0.0 221568   776 pts/0    S+   14:07   0:00 grep --color=auto S+
    [root@localhost ~]# nsenter --target 24439 --mount --uts  --net --pid ps aux
    USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root           1  0.0  0.0  12100  2416 pts/0    Ss+   2021   0:01 /bin/bash /var/db2_setup/lib/setup_db2_instance.sh
    root       14738  0.0  0.0 111996  5688 ?        Ss    2021  33:46 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord.conf
    root       14741  0.0  0.0   4420   632 pts/0    S+    2021   0:00 tail -f /database/config/db2inst1/sqllib/db2dump/DIAG0000/db2diag.log
    root       14742  0.0  0.0  95724 12332 ?        S     2021   5:59 /opt/ibm/db2/V11.5/bin/db2fmcd
    root       14743  0.0  0.0 112952  5696 ?        S     2021   0:00 /usr/sbin/sshd -D
    root       65263  0.0  0.0 1313568 8708 ?        Sl    2021   0:06 db2wdog 0 [db2inst1]
    db2inst1   65265  0.1  0.1 3716884 30216 ?       Sl    2021  88:11 db2sysc 0
    root       65271  0.0  0.0 1316348 3548 ?        S     2021   0:00 db2ckpwd 0
    root       65272  0.0  0.0 1316348 3548 ?        S     2021   0:00 db2ckpwd 0
    root       65273  0.0  0.0 1316348 3544 ?        S     2021   0:00 db2ckpwd 0
    db2inst1   65275  0.0  0.0 723620  7448 ?        S     2021   0:00 db2vend (PD Vendor Process - 1) 0
    db2inst1   65283  0.0  0.1 961132 23452 ?        Sl    2021  16:42 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000
    db2fenc1   66416  0.0  0.1 650952 20416 ?        Sl    2021   0:01 db2fmp ( ,1,0,0,0,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000
    db2fenc1   70651  0.0  0.1 424932 19852 ?        Sl    2021   0:00 db2fmp ( ,0,0,0,0,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000
    db2fenc1 2274960  0.0  0.1 424932 19872 ?        Sl    2021   0:00 db2fmp ( ,0,0,0,0,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000
    root     4113980  0.0  0.0  11704  2660 ?        S    19:05   0:00 bash -c /var/db2_setup/lib/backup_cfg.sh >> /tmp/backup_cfg.out 2>&1
    root     4113981  0.0  0.0  11704  2816 ?        S    19:05   0:00 /bin/bash /var/db2_setup/lib/backup_cfg.sh
    root     4114110  0.0  0.0   4380   736 ?        S    19:05   0:00 sleep 2m
    root     4114269  0.1  0.0  12100  3072 ?        S    19:07   0:00 /bin/bash /var/db2_setup/lib/fix_etc_host.sh
    root     4114285  0.0  0.0   4380   716 ?        S    19:07   0:00 sleep 10
    root     4114290  0.0  0.0  53348  3856 pts/0    R+   19:07   0:00 ps aux
    [root@localhost ~]#
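    If you know the container rather than the host PID, Docker can resolve the PID for you; a small sketch (the container name my-db2 is illustrative):

    # Resolve the container's init PID, then enter its namespaces
    PID=$(docker inspect --format '{{.State.Pid}}' my-db2)
    nsenter --target "$PID" --mount --uts --net --pid ps aux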
    
  • Never accept the defaults: Lessons Learned using OpenJ9 in a Container

    Eclipse OpenJ9 is an efficient virtual machine with a small dynamic footprint that is used for many cloud applications. Many projects run their applications on OpenJ9, such as Apache OpenWhisk, the IBM FHIR Server and Open Liberty.

    I learned a few things about running Java applications with the OpenJ9 VM in Docker:

    1. Eclipse OpenJ9 knows about modern applications
    2. Tweak Your Settings
    3. Review and Test Your Settings

    1.   Eclipse OpenJ9 knows about modern applications

    The Eclipse OpenJ9 team smartly realized many Java applications run in a container, namespace or virtual machine with a runtime-determined memory allocation. For instance, Kubernetes may scale the memory available to a Java container from 4G to 8G. As such, the OpenJ9 VM automatically adjusts the minimum and maximum heap sizes based on the available memory in the container.

    There are plenty of knobs to tweak. Further, the Eclipse team has some compelling research on performance and places to look to tweak your application.

    2.   Tweak your settings

    I scanned through the JVM -XX options and found -XX:InitialRAMPercentage and -XX:MaxRAMPercentage. These options, combined with -XX:+UseContainerSupport (the default), enable the VM to allocate a fraction of the available memory to the JVM. I tried this with a Docker container assigned 2GB of memory:

    -XX:InitialRAMPercentage=50.00
    -XX:MaxRAMPercentage=90.00
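    A minimal sketch of passing these flags to a container (the image name and jar path are illustrative):

    # Cap the container at 2GB and let the JVM size its heap from that limit
    docker run --rm --memory=2g my-openj9-app \
        java -XX:InitialRAMPercentage=50.00 -XX:MaxRAMPercentage=90.00 -jar /app/app.jar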

    3.   Review and Test Your Settings

    I turned on the -verbose:gc setting, started my VM, and ran some load.
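    Something like the following enables the logging, reusing the flags above (again, the image and paths are illustrative); -Xverbosegclog writes the verbose GC events to a file:

    docker run --rm --memory=2g my-openj9-app \
        java -verbose:gc -Xverbosegclog:/tmp/gc.xml \
            -XX:InitialRAMPercentage=50.00 -XX:MaxRAMPercentage=90.00 -jar /app/app.jar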

    I waited for a steady state, checked my available memory, and saw 469M free of 858M total.

    <gc-end id="443" type="scavenge" contextid="439" durationms="57.408" usertimems="223.188" systemtimems="0.172" stalltimems="4.439" timestamp="2021-12-20T18:00:43.360" activeThreads="4">
      <mem-info id="444" free="469525144" total="858914816" percent="54">
        <mem type="nursery" free="269655128" total="468451328" percent="57">
          <mem type="allocate" free="269655128" total="349700096" percent="77" />
          <mem type="survivor" free="0" total="118751232" percent="0" />
        </mem>
        <mem type="tenure" free="199870016" total="390463488" percent="51" macro-fragmented="99381968">
          <mem type="soa" free="180346432" total="370939904" percent="48" />
          <mem type="loa" free="19523584" total="19523584" percent="100" />
        </mem>
      </mem-info>
    </gc-end>

    I tweaked the memory based on the garbage collection log and used the tools mentioned in the OpenJ9 docs. You can also see some more of the enhancements for containers that OpenJ9 has added.

    Go forth, tweak the settings for the container and best wishes. Please comment if I can help.

  • Upper Limit for PreparedStatement Parameters

    Thanks to Lup Peng’s post, PostgreSQL JDBC Driver – Upper Limit on Parameters in PreparedStatement, I was able to diagnose an upper limit:

    Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 54838
    	at org.postgresql.core.PGStream.sendInteger2(PGStream.java:349)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1546)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1871)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1432)
    	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:314)
    	... 96 more
    

    I had 54K parameters on my query. It turns out that, due to public void sendInteger2(int val) throws IOException, PGStream.java has a maximum of Short.MAX_VALUE (32767).

    The net for others hitting the same issue in different RDBMS systems:

    1. Postgres – 32,767 parameters
    2. IBM Db2 – 32,767 parameters (the maximum number of host variable references in a dynamic SQL statement) and 2,097,152 for the maximum length of the generated SQL text; see the documented Db2 limits
    3. Derby – storage capacity is the limit: https://db.apache.org/derby/docs/10.14/ref/refderby.pdf
  • Bulk Data Configurations for IBM FHIR Server’s Storage Providers

    As of IBM FHIR Server 4.10.2

    A colleague of mine is entering into the depths of the IBM FHIR Server’s Bulk Data feature. Each tenant in the IBM FHIR Server may specify multiple storageProviders. The default storage provider is assumed unless one is selected with the HTTP headers X-FHIR-BULKDATA-PROVIDER and X-FHIR-BULKDATA-PROVIDER-OUTCOME. Each tenant’s configuration may mix the different providers; however, each provider is of a single type. For instance, minio is aws-s3, default is file, and az is azure-blob.

    Note, type https is only applicable to $import operations. Export is only supported with aws-s3, azure-blob and file.
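    For example, to direct a system export at the minio provider named above (a sketch; the host and credentials are the local-development defaults used later in this post):

    curl -k -u fhiruser:change-password -i \
        -H 'X-FHIR-BULKDATA-PROVIDER: minio' \
        -H 'X-FHIR-BULKDATA-PROVIDER-OUTCOME: minio' \
        'https://localhost:9443/fhir-server/api/v4/$export?_type=AllergyIntolerance'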

    File Storage Provider Configuration

    The file storage provider uses a directory local to the IBM FHIR Server. The fileBase is an absolute path that must exist. Each import inputUrl is a relative path from the fileBase. The File Provider is available for both import and export. Authentication is not supported.

    {
        "__comment": "IBM FHIR Server BulkData - File Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "file",
                        "fileBase": "/opt/wlp/usr/servers/fhir-server/output",
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false,
                        "create": false
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "file"
            }
        ]
    }
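    To submit that Parameters body (a sketch; it assumes the example above is saved as import.json, and the host and credentials are the local-development defaults):

    curl -k -u fhiruser:change-password -i \
        -X POST -H 'Content-Type: application/fhir+json' \
        -d @import.json \
        'https://localhost:9443/fhir-server/api/v4/$import'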
    

    HTTPS Storage Provider Configuration

    The https storage provider uses a set of validBaseUrls to confirm each $import inputUrl is acceptable. The https provider is available for import only. Authentication is not supported.

    {
        "__comment": "IBM FHIR Server BulkData - Https Storage configuration (Import only)",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "https",
                        "__comment": "The whitelist of valid base urls, you can always disable",
                        "validBaseUrls": [],
                        "disableBaseUrlValidation": true,
                        "__comment": "You can always direct to another provider",
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "https://validbaseurl.com/r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "https"
            }
        ]
    }
    

    Azure Storage Provider Configuration

    The azure-blob storage provider uses a connection string from the Azure Blob configuration. The bucketName is the blob container name. The azure-blob provider supports import and export. Authentication and configuration are built into the connection string.

        "__comment": "IBM FHIR Server BulkData - Azure Blob Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "azure-blob",
                        "bucketName": "fhirtest",
                        "auth": {
                            "type": "connection",
                            "connection": "DefaultEndpointsProtocol=https;AccountName=fhirdt;AccountKey=ABCDEF==;EndpointSuffix=core.windows.net"
                        },
                        "disableBaseUrlValidation": true,
                        "disableOperationOutcomes": true
                    }
                }
            }
        }
    }

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "azure-blob"
            }
        ]
    }
    

    S3 Storage Provider Configuration

    The aws-s3 storage provider supports import and export. The bucketName, location, auth style (hmac, iam), endpointInternal, and endpointExternal are separate values in the configuration. Note, enableParquet is obsolete.

    {
        "__comment": "IBM FHIR Server BulkData - AWS/COS/Minio S3 Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "aws-s3",
                        "bucketName": "myfhirbucket",
                        "location": "us-east-2",
                        "endpointInternal": "https://s3.us-east-2.amazonaws.com",
                        "endpointExternal": "https://myfhirbucket.s3.us-east-2.amazonaws.com",
                        "auth": {
                            "type": "hmac",
                            "accessKeyId": "AKIAAAAF2TOAAATMAAAO",
                            "secretAccessKey": "mmmUVsqKzAAAAM0QDSxH9IiaGQAAA"
                        },
                        "enableParquet": false,
                        "disableBaseUrlValidation": true,
                        "exportPublic": false,
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false,
                        "create": false,
                        "presigned": true,
                        "accessType": "host"
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "aws-s3"
            }
        ]
    }

    Note, you can exchange aws-s3 and ibm-cos as the parameter.where(name='storageDetail'). These are treated interchangeably.
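    Once an export request is accepted, the server responds with a Content-Location header pointing at the job status; a sketch of the polling flow (host and credentials are the local-development defaults):

    # Kick off the export and note the Content-Location header in the response
    curl -k -u fhiruser:change-password -i \
        'https://localhost:9443/fhir-server/api/v4/$export?_type=AllergyIntolerance'
    # Poll the returned URL until it responds 200 with the output manifest
    curl -k -u fhiruser:change-password '<Content-Location value>'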

    There are lots of possible configurations; I hope this helps you.

  • Not Yet Another Docker to Rancher Desktop Alternative

    Docker is changing its license going forward for Docker Desktop, as noted in their license and blog. Much like a former colleague of mine’s article, YADPBP: Yet Another Docker to Podman Blog Post, I have entered into the Docker Desktop migration.

    I’ve tried minikube, microk8s, podman, Lima-VM and Rancher Desktop. Several of these solutions run a single virtual machine, such as Multipass. In fact, I tried using Multipass with Podman installed inside of the Multipass VM. I found the networking and forwarding needed to test multiple containers against a local dev environment was a pain. I spent a few days working with minikube, microk8s and podman, and ended up on Rancher Desktop.

    Rancher Desktop has flavors for Mac and Linux (I don’t run Windows as a base OS anymore). I downloaded one of the tech preview releases from GitHub and installed it. It’s fairly simple, and they have a straightforward readme. One trick: be sure to install and set up nerdctl.

    nerdctl is a Docker-compatible command-line replacement and integrates seamlessly with Rancher Desktop.

    ~/$ nerdctl run -p 9443:9443 --name fhir -e BOOTSTRAP_DB=true ibmcom/ibm-fhir-server
    docker.io/ibmcom/ibm-fhir-server:latest:                                          resolved       |++++++++++++++++++++++++++++++++++++++| 
    manifest-sha256:41f6894fa546899e02e4a8d2370bb6910eb72ed77ec58ae06c3de5e12f3ebb1c: done           |++++++++++++++++++++++++++++++++++++++| 
    config-sha256:3c912cc1a5b7c69ae15c9b969ae0085839b926e825b6555a28518458f4bd4935:   done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:06038631a24a25348b51d1bfc7d0a0ee555552a8998f8328f9b657d02dd4c64c:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:661abc6f8cb3c6d78932032ce87eb294f43f6eca9daa7681816d83ee0f62fb3d:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:e74a68c65fb24cc6fabe5f925d450cae385b2605d8837d5d7500bdd5bad7f268:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:262268b65bd5f33784d6a61514964887bc18bc00c60c588bc62bfae7edca46f1:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:d5e08b0b786452d230adf5d9050ce06b4f4d73f89454a25116927242507b603b:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:50dc68e56d6bac757f0176b8b49cffc234879e221c64a8481805239073638fb4:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:1831e571c997bd295bd5ae59bfafd69ba942bfe9e63f334cfdc35a8c86886d47:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:d29b7147ca6a2263381a0e4f3076a034b223c041d2a8f82755c30a373bb6ade7:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:a2643035bb64ff12bb72e7b47b1d88e0cdbc3846b5577a9ee9c44baf7c707b20:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:3ba05464ea94778cacf3f55c7b11d7d41293c1fc169e9e290b48e2928eaad779:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:6fb3372b06eb12842f94f14039c1d84608cbed52f56d3862f2c545d65e784a00:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:4cf8515f0f05c79594b976e803ea54e62fcaee1f6e5cfadb354ab687b758ed55:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:4debf1aa73b3e81393dc46e2f3c9334f6400e5b0160beb00196d0e5803af1e63:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:ecaacecff5f80531308a1948790550b421ca642f57b78ea090b286f74f3a7ba1:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:1ccf6767107a3807289170cc0149b6f60b5ed2f52ba3ba9b00b8d320951c4317:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:8144e53119b8ac586492370a117aa83bc31cf439c70663a58894fc1dfe9a4e08:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:16bdcde4e18e3d74352c7e42090514c7f2e0213604c74e5a6bf938647c195546:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:e9726188008a01782dcb61103c7d892f605032386f5ba7ea2acbcb6cf9770a0e:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:c37730e2eaef6bbb446d2ebe5ec230eef4abdb36e6153778d1ae8416f5543e7d:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:35d3a4502906b8e3a4c962902925f8e1932c8fb012fa84e875494049d8a6b324:    done           |++++++++++++++++++++++++++++++++++++++| 
    elapsed: 94.3s                                                                    total:  696.1  (7.4 MiB/s)                                       
    bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Current directory: /opt/ibm-fhir-server
    bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Performing Derby database bootstrapping
    2021-12-07 19:42:11.348 00000001    INFO .common.JdbcConnectionProvider Opening connection to database: jdbc:derby:/output/derby/fhirDB;create=true
    2021-12-07 19:42:13.138 00000001 WARNING ls.pool.PoolConnectionProvider Get connection took 1.791 seconds
    2021-12-07 19:42:13.382 00000001    INFO m.fhir.schema.app.LeaseManager Requesting update lease for schema 'APP' [attempt 1]
    

    When you see “ready to run a smarter planet”, the server is started.

    [12/7/21, 19:45:00:437 UTC] 00000027 FeatureManage A   CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 20.229 seconds.
    

    When running the $healthcheck, you see:

    curl -u fhiruser:change-password https://localhost:9443/fhir-server/api/v4/\$healthcheck -k -H "Prefer: return=OperationOutcome"
    {"resourceType":"OperationOutcome","issue":[{"severity":"information","code":"informational","details":{"text":"All OK"}}]}
    

    Rancher Desktop is up… time to run with it…

  • GitHub Actions: Concurrency Control

    My team uses GitHub Actions: 18 jobs in total across about 12 workflows. When we get multiple pull requests, we end up driving contention on the workflows and resources we use. I ran across concurrency control for the workflows.

    To take advantage of concurrency control, add this snippet to the bottom of your pull request workflow:

    concurrency:
      group: audit-${{ github.event.pull_request.number || github.sha }}
      cancel-in-progress: true 
    

    When you stack commits, you end up with this warning, and the prior job is stopped:

    e2e-db2-with-bulkdata (11)
    Canceling since a higher priority waiting request for 'integration-3014' exists

  • Tracing the IBM FHIR Server file access on MacOSX

    If you want to trace the file access of the IBM FHIR Server, you can use fs_usage with sudo.

    1. Find the Server
    PS=$(ps -ef | grep -i fhir-server | grep -v grep | awk '{print $2}')
    sudo fs_usage -w ${PS} | grep -i json
    
    2. Check what files are used
    $ sudo fs_usage -w ${PS} | tee out.log | grep -i json
    12:11:02.328326  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000023   java.2179466
    12:11:02.328342  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000008   java.2179466
    12:11:02.328360  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000008   java.2179466
    12:11:02.328368  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000005   java.2179466
    12:11:02.332330  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000020   java.2179466
    12:11:02.332437  open              F=109      (R___________)  config/default/fhir-server-config.json                                                                                                                          0.000085   java.2179466
    12:11:02.350576  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000016   java.2179466
    

    You can then see which operations are executed on fhir-server-config.json, or any other file the server accesses.

  • Using Docker and Kafka with IBM FHIR Server Audit for Testing

    The attached gist is a package of Kubernetes YAML files and Java code to test locally with Docker/Kubernetes and the IBM FHIR Server.

    You’ll want to kubectl apply -f <filename> for each of the files.
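    For example, from a directory containing the gist’s files:

    for f in *.yaml; do kubectl apply -f "$f"; done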

    Then merge the fhir-server-config-snippet.json into your fhir-server-config.json.

    And run:

    kubectl config use-context docker-desktop
    kubectl -n fhir-cicd-ns port-forward kafka-0 9092
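    You can confirm the pods are up with:

    kubectl -n fhir-cicd-ns get pods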

    Thanks to https://github.com/d1egoaz/minikube-kafka-cluster for the inspiration.