Blog

  • Upper Limit for PreparedStatement Parameters

    Thanks to Lup Peng's post PostgreSQL JDBC Driver – Upper Limit on Parameters in PreparedStatement, I was able to diagnose an upper limit:

    Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 54838
    	at org.postgresql.core.PGStream.sendInteger2(PGStream.java:349)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1546)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1871)
    	at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1432)
    	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:314)
    	... 96 more
    

    I had 54K parameters in my query. It turns out that, because PGStream.java sends the parameter count through public void sendInteger2(int val) throws IOException, the maximum number of parameters is Short.MAX_VALUE – 32767. One workaround is to chunk the parameters across multiple statements, as sketched after the list below.

    The net for others hitting the same issue on different RDBMS systems:

    1. Postgres – 32767 Parameters
    2. IBM Db2 – 32,767 parameters (the maximum number of host variable references in a dynamic SQL statement) and 2,097,152 bytes of text in the generated SQL. Limits
    3. Derby – storage capacity is the limit. https://db.apache.org/derby/docs/10.14/ref/refderby.pdf
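
    A minimal sketch of the chunking workaround, assuming a hypothetical my_table queried with an IN-list (not from the original post):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Collections;
    import java.util.List;

    public class ChunkedQuery {
        // Stay under the driver's 2-byte parameter count (Short.MAX_VALUE)
        private static final int MAX_PARAMS = 32767;

        public static void queryByIds(Connection conn, List<Long> ids) throws SQLException {
            for (int start = 0; start < ids.size(); start += MAX_PARAMS) {
                List<Long> chunk = ids.subList(start, Math.min(start + MAX_PARAMS, ids.size()));
                // One '?' marker per id in this chunk
                String markers = String.join(",", Collections.nCopies(chunk.size(), "?"));
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, name FROM my_table WHERE id IN (" + markers + ")")) {
                    for (int i = 0; i < chunk.size(); i++) {
                        ps.setLong(i + 1, chunk.get(i));
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // process each row
                        }
                    }
                }
            }
        }
    }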
  • Bulk Data Configurations for IBM FHIR Server’s Storage Providers

    As of IBM FHIR Server 4.10.2

    A colleague of mine is entering into the depths of the IBM FHIR Server's Bulk Data feature. Each tenant in the IBM FHIR Server may specify multiple storageProviders. The default provider is assumed, unless specified with the HTTP headers X-FHIR-BULKDATA-PROVIDER and X-FHIR-BULKDATA-PROVIDER-OUTCOME. Each tenant's configuration may mix different providers; however, each provider is of a single type. For instance, minio is aws-s3, default is file, and az is azure-blob.

    Note, type https is only applicable to $import operations. $export is only supported with aws-s3, azure-blob and file.
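
    As a hedged illustration of selecting providers per request: the provider names (minio, default), the credentials, and the server URL below are assumptions based on the examples in this post, and a trusted TLS certificate is assumed (the curl examples elsewhere in this post use -k):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class ExportWithProvider {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder().encodeToString("fhiruser:change-password".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:9443/fhir-server/api/v4/$export?_type=Patient"))
                .header("Authorization", "Basic " + auth)
                // Route export payloads to the "minio" provider and the
                // operation outcomes to the "default" provider
                .header("X-FHIR-BULKDATA-PROVIDER", "minio")
                .header("X-FHIR-BULKDATA-PROVIDER-OUTCOME", "default")
                .GET()
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode()); // expect 202 Accepted
        }
    }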

    File Storage Provider Configuration

    The file storage provider uses a directory local to the IBM FHIR Server. The fileBase is an absolute path that must exist. Each import inputUrl is a relative path from the fileBase. The File Provider is available for both import and export. Authentication is not supported.

    {
        "__comment": "IBM FHIR Server BulkData - File Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "file",
                        "fileBase": "/opt/wlp/usr/servers/fhir-server/output",
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false,
                        "create": false
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "file"
            }
        ]
    }
    

    HTTPS Storage Provider Configuration

    The https storage provider uses a set of validBaseUrls to confirm each $import inputUrl is acceptable. The https provider is available for import only. Authentication is not supported.

    {
        "__comment": "IBM FHIR Server BulkData - Https Storage configuration (Import only)",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "https",
                        "__comment": "The whitelist of valid base urls, you can always disable",
                        "validBaseUrls": [],
                        "disableBaseUrlValidation": true,
                        "__comment": "You can always direct to another provider",
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "https://validbaseurl.com/r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "https"
            }
        ]
    }
    

    Azure Storage Provider Configuration

    The azure-blob storage provider uses a connection string from the Azure Blob configuration. The bucketName is the Azure Blob container name. The azure-blob provider supports import and export. Authentication and configuration are built into the connection string.

        "__comment": "IBM FHIR Server BulkData - Azure Blob Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "azure-blob",
                        "bucketName": "fhirtest",
                        "auth": {
                            "type": "connection",
                            "connection": "DefaultEndpointsProtocol=https;AccountName=fhirdt;AccountKey=ABCDEF==;EndpointSuffix=core.windows.net"
                        },
                        "disableBaseUrlValidation": true,
                        "disableOperationOutcomes": true
                    }
                }
            }
        }
    }

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "azure-blob"
            }
        ]
    }
    

    S3 Storage Provider Configuration

    The aws-s3 storage provider supports import and export. The bucketName, location, auth style (hmac, iam), endpointInternal, and endpointExternal are separate values in the configuration. Note, enableParquet is obsolete.

    {
        "__comment": "IBM FHIR Server BulkData - AWS/COS/Minio S3 Storage configuration",
        "fhirServer": {
            "bulkdata": {
                "__comment" : "The other bulkdata configuration elements are skipped",
                "storageProviders": {
                    "default": {
                        "type": "aws-s3",
                        "bucketName": "myfhirbucket",
                        "location": "us-east-2",
                        "endpointInternal": "https://s3.us-east-2.amazonaws.com",
                        "endpointExternal": "https://myfhirbucket.s3.us-east-2.amazonaws.com",
                        "auth": {
                            "type": "hmac",
                            "accessKeyId": "AKIAAAAF2TOAAATMAAAO",
                            "secretAccessKey": "mmmUVsqKzAAAAM0QDSxH9IiaGQAAA"
                        },
                        "enableParquet": false,
                        "disableBaseUrlValidation": true,
                        "exportPublic": false,
                        "disableOperationOutcomes": true,
                        "duplicationCheck": false,
                        "validateResources": false,
                        "create": false,
                        "presigned": true,
                        "accessType": "host"
                    }
                }
            }
        }
    }
    

    An example request is:

    {
        "resourceType": "Parameters",
        "id": "30321130-5032-49fb-be54-9b8b82b2445a",
        "parameter": [
            {
                "name": "inputSource",
                "valueUri": "https://my-server/source-fhir-server"
            },
            {
                "name": "inputFormat",
                "valueString": "application/fhir+ndjson"
            },
            {
                "name": "input",
                "part": [
                    {
                        "name": "type",
                        "valueString": "AllergyIntolerance"
                    },
                    {
                        "name": "url",
                        "valueUrl": "r4_AllergyIntolerance.ndjson"
                    }
                ]
            },
            {
                "name": "storageDetail",
                "valueString": "aws-s3"
            }
        ]
    }

    Note, you can exchange aws-s3 and ibm-cos as the parameter.where(name='storageDetail') value. These are treated interchangeably.

    There are many possible configurations; I hope this helps you.

  • Not Yet Another Docker to Rancher Desktop Alternative

    Docker is changing its license going forward for Docker Desktop, as noted in their license and blog. Much like a former colleague of mine's article YADPBP: Yet Another Docker to Podman Blog Post, I have entered into the Docker Desktop migration.

    I’ve tried minikube, microk8s, podman, Lima-VM and Rancher Desktop. Many of these solutions run inside a single VM, such as Multipass. In fact, I tried using Multipass with Podman installed inside of the Multipass VM. I found the networking and forwarding needed while testing multiple containers against a local dev environment was a pain. I spent a few days working with minikube, microk8s and podman, and ended up on Rancher Desktop.

    Rancher Desktop has flavors for Mac and Linux (I don’t run Windows as a base OS anymore). I downloaded one of the tech preview releases from GitHub and installed it. It’s fairly simple, and they have a straightforward readme. One trick: be sure to install and set up nerdctl.

    (Screenshot: selecting nerdctl in Rancher Desktop)

    nerdctl is a Docker-compatible command-line replacement and integrates seamlessly with Rancher Desktop.

    ~/$ nerdctl run -p 9443:9443 --name fhir -e BOOTSTRAP_DB=true ibmcom/ibm-fhir-server
    docker.io/ibmcom/ibm-fhir-server:latest:                                          resolved       |++++++++++++++++++++++++++++++++++++++| 
    manifest-sha256:41f6894fa546899e02e4a8d2370bb6910eb72ed77ec58ae06c3de5e12f3ebb1c: done           |++++++++++++++++++++++++++++++++++++++| 
    config-sha256:3c912cc1a5b7c69ae15c9b969ae0085839b926e825b6555a28518458f4bd4935:   done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:06038631a24a25348b51d1bfc7d0a0ee555552a8998f8328f9b657d02dd4c64c:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:661abc6f8cb3c6d78932032ce87eb294f43f6eca9daa7681816d83ee0f62fb3d:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:e74a68c65fb24cc6fabe5f925d450cae385b2605d8837d5d7500bdd5bad7f268:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:262268b65bd5f33784d6a61514964887bc18bc00c60c588bc62bfae7edca46f1:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:d5e08b0b786452d230adf5d9050ce06b4f4d73f89454a25116927242507b603b:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:50dc68e56d6bac757f0176b8b49cffc234879e221c64a8481805239073638fb4:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:1831e571c997bd295bd5ae59bfafd69ba942bfe9e63f334cfdc35a8c86886d47:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:d29b7147ca6a2263381a0e4f3076a034b223c041d2a8f82755c30a373bb6ade7:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:a2643035bb64ff12bb72e7b47b1d88e0cdbc3846b5577a9ee9c44baf7c707b20:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:3ba05464ea94778cacf3f55c7b11d7d41293c1fc169e9e290b48e2928eaad779:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:6fb3372b06eb12842f94f14039c1d84608cbed52f56d3862f2c545d65e784a00:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:4cf8515f0f05c79594b976e803ea54e62fcaee1f6e5cfadb354ab687b758ed55:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:4debf1aa73b3e81393dc46e2f3c9334f6400e5b0160beb00196d0e5803af1e63:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:ecaacecff5f80531308a1948790550b421ca642f57b78ea090b286f74f3a7ba1:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:1ccf6767107a3807289170cc0149b6f60b5ed2f52ba3ba9b00b8d320951c4317:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:8144e53119b8ac586492370a117aa83bc31cf439c70663a58894fc1dfe9a4e08:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:16bdcde4e18e3d74352c7e42090514c7f2e0213604c74e5a6bf938647c195546:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:e9726188008a01782dcb61103c7d892f605032386f5ba7ea2acbcb6cf9770a0e:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:c37730e2eaef6bbb446d2ebe5ec230eef4abdb36e6153778d1ae8416f5543e7d:    done           |++++++++++++++++++++++++++++++++++++++| 
    layer-sha256:35d3a4502906b8e3a4c962902925f8e1932c8fb012fa84e875494049d8a6b324:    done           |++++++++++++++++++++++++++++++++++++++| 
    elapsed: 94.3s                                                                    total:  696.1  (7.4 MiB/s)                                       
    bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Current directory: /opt/ibm-fhir-server
    bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Performing Derby database bootstrapping
    2021-12-07 19:42:11.348 00000001    INFO .common.JdbcConnectionProvider Opening connection to database: jdbc:derby:/output/derby/fhirDB;create=true
    2021-12-07 19:42:13.138 00000001 WARNING ls.pool.PoolConnectionProvider Get connection took 1.791 seconds
    2021-12-07 19:42:13.382 00000001    INFO m.fhir.schema.app.LeaseManager Requesting update lease for schema 'APP' [attempt 1]
    

    When you see “ready to run a smarter planet”, the server is started.

    [12/7/21, 19:45:00:437 UTC] 00000027 FeatureManage A   CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 20.229 seconds.
    

    When running the $healthcheck, you see:

    curl -u fhiruser:change-password https://localhost:9443/fhir-server/api/v4/\$healthcheck -k -H "Prefer: return=OperationOutcome"
    {"resourceType":"OperationOutcome","issue":[{"severity":"information","code":"informational","details":{"text":"All OK"}}]}
    

    Rancher Desktop is up… time to run with it…

  • GitHub Actions: Concurrency Control

    My team uses GitHub Actions – 18 jobs in total across about 12 workflows. When we get multiple pull requests, we end up driving contention on the workflows and resources we use. I ran across concurrency control for the workflows.

    To take advantage of concurrency control add this snippet to the bottom of your pull request workflow:

    concurrency:
      group: audit-${{ github.event.pull_request.number || github.sha }}
      cancel-in-progress: true 
    

    When you stack the commits you end up with this warning, and the prior job is stopped:

    e2e-db2-with-bulkdata (11)
    Canceling since a higher priority waiting request for 'integration-3014' exists

  • Tracing the IBM FHIR Server file access on MacOSX

    If you want to trace the file access of the IBM FHIR Server, you can use fs_usage with sudo.

    1. Find the Server
    PS=$(ps -ef | grep -i fhir-server | grep -v grep | awk '{print $2}')
    sudo fs_usage -w ${PS} | grep -i json
    
    2. Check what files are used
    $ sudo fs_usage -w ${PS} | tee out.log | grep -i json
    12:11:02.328326  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000023   java.2179466
    12:11:02.328342  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000008   java.2179466
    12:11:02.328360  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000008   java.2179466
    12:11:02.328368  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000005   java.2179466
    12:11:02.332330  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000020   java.2179466
    12:11:02.332437  open              F=109      (R___________)  config/default/fhir-server-config.json                                                                                                                          0.000085   java.2179466
    12:11:02.350576  stat64                                 config/default/fhir-server-config.json                                                                                                                                0.000016   java.2179466
    

    You can then see what operations are executed on fhir-server-config.json or any other file the server accesses.


  • Using Docker and Kafka with IBM FHIR Server Audit for Testing

    The attached gist is a package of Kubernetes yaml files and Java code to test locally with Docker/Kubernetes with the IBM FHIR Server.

    You’ll want to kubectl apply -f <filename> for each of the files.

    Then apply the fhir-server-config-snippet.json to your fhir-server-config.json.

    And run

    kubectl config use-context docker-desktop
    kubectl -n fhir-cicd-ns port-forward kafka-0 9092

    Thanks to https://github.com/d1egoaz/minikube-kafka-cluster for the inspiration.

  • DockerHub API to Get Statistics

    I had to gather statistics for my team’s repo. Here is a small recipe to get the pull count for a specific repository.

    1. Set up the Bearer Token.
    export DOCKER_USERNAME="prb112"
    export DOCKER_PASSWORD="<<>>"
    
    export TOKEN=$(curl -s -H "Content-Type: application/json" \
       -X POST -d '{"username": "'${DOCKER_USERNAME}'", "password": "'${DOCKER_PASSWORD}'"}' \
       https://hub.docker.com/v2/users/login/ | jq -r .token)
    
    2. Pull the stats:
    curl -L -H "Authorization: Bearer $TOKEN" \
        https://hub.docker.com/v2/repositories/ibmcom/ibm-fhir-server \
        | jq -r '.pull_count'
    574725
    

    Thanks to Arthur Koziel’s Blog

  • Using the HL7 FHIR® Da Vinci Health Record Exchange $member-match operation in IBM FHIR Server

    HL7 FHIR® Da Vinci Health Record Exchange (HREX) is a FHIR Implementation Guide at version 0.2.0 – STU R1 – 2nd ballot. The HREX Implementation Guide is a foundational guide for all of the Da Vinci guides which support US payer, provider, member and HIPAA covered entity data exchange. The guide defines "FHIR profiles, operations" and depends on HL7 FHIR® US Core Implementation Guide STU3 Release 3.1.0. As part of an issue, I implemented this profile and operation.

    Members (Patient) move from one plan (Coverage) to another plan or provider. To facilitate this exchange across boundaries, HREX introduces the $member-match operation, which allows one health plan to retrieve a unique identifier for a member from another health plan using the member’s demographic and coverage information. This identifier can then be used to perform subsequent queries and operations. Implementations of a deterministic match require a match on member id or subscriber id at a minimum.

    The IBM FHIR Server team has implemented the HREX Implementation Guide and Operation as two modules: fhir-ig-davinci-hrex HREX 0.2.0 Implementation Guide and fhir-operation-member-match. The operation depends on fhir-ig-us-core US Core 3.1.1. Note, in the main branch the fhir-ig-us-core supports 3.1.1 and 4.0.0. These three modules are to be released to Maven Central when the next version is tagged.

    The $member-match operation executes on the Resource Type – Patient/$member-match. The operation implements the IBM FHIR Server Extended Operation framework using the Java Service Loader.

    (Diagram: the extended operation framework)

    The $member-match operation provides a default strategy, which executes a series of searches on the local FHIR Server to find a Patient on the system matching the input Patient and Coverage (to-match). The behavior is extensible by extending the strategy.

    (Diagram: the MemberMatch framework)

    If the default strategy is not used, the new strategy must be registered with the Java Service Loader: the JAR’s META-INF/services/com.ibm.fhir.operation.davinci.hrex.provider.strategy.MemberMatchStrategy file must contain the fully qualified name of the class that implements MemberMatchStrategy. Alternatively, AbstractMemberMatch or DefaultMemberMatchStrategy may be used as a starting point.

    For implementers, there is an existing AbstractMemberMatch which provides a template and series of hooks to extend:
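
    A minimal sketch of such an extension, with the hook bodies elided since the template-method signatures live in the fhir-operation-member-match source (the package name is illustrative):

    package com.example.hrex;

    import com.ibm.fhir.operation.davinci.hrex.provider.strategy.AbstractMemberMatch;

    // Declared abstract here because the concrete hooks are elided; a real
    // strategy implements the abstract methods AbstractMemberMatch defines.
    public abstract class MyMemberMatchStrategy extends AbstractMemberMatch {
        // ... override/implement the validation and search hooks here ...
    }

    // Registered for the Java Service Loader via the file
    // META-INF/services/com.ibm.fhir.operation.davinci.hrex.provider.strategy.MemberMatchStrategy
    // containing one line: com.example.hrex.MyMemberMatchStrategy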

    MemberMatchResult implements a light-weight response which gets translated to Output Parameters, or to OperationOutcomes if there is no match or there are multiple matches.

    More advanced processing of the input and validation is shown in DefaultMemberMatchStrategy which processes the input resources to generate SearchParameter values to query the local IBM FHIR Server.

    It’s highly recommended to extend the default implementation and override the getMemberMatchIdentifier for the strategy:
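
    A hedged sketch of that recommendation, assuming getMemberMatchIdentifier returns the strategy key as a String (the package and key are illustrative):

    package com.example.hrex;

    import com.ibm.fhir.operation.davinci.hrex.provider.strategy.DefaultMemberMatchStrategy;

    public class CustomMemberMatchStrategy extends DefaultMemberMatchStrategy {
        @Override
        public String getMemberMatchIdentifier() {
            // Selected by the "strategy" value in fhirServer/operations/membermatch
            return "custom";
        }
    }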

    The $member-match operation is configured for each tenant using the respective fhir-server-config.json. The configuration is rooted under the path fhirServer/operations/membermatch.

    Name           Default   Description
    enabled        true      Enables or disables the MemberMatch operation for the tenant
    strategy       default   The key used to identify the MemberMatchStrategy that is loaded using the Java Service Loader
    extendedProps  true      Used by custom MemberMatchStrategy implementations
    {
        "__comment": "",
        "fhirServer": {
            "operations": {
                "membermatch": {
                    "enabled": true,
                    "strategy": "default",
                    "extendedProps": {
                        "a": "b"
                    }
                }
            }
        }
    }
    

    Recipe

    1. Prior to 4.10.0, build the Maven projects and the Docker build. You should see [INFO] BUILD SUCCESS after each Maven build, and docker.io/ibmcom/ibm-fhir-server:latest when the Docker build is successful.
    mvn clean install -f fhir-examples -B -DskipTests -ntp
    mvn clean install -f fhir-parent -B -DskipTests -ntp
    docker build -t ibmcom/ibm-fhir-server:latest fhir-install
    
    2. Create a temporary directory for the dependencies that we’ll mount to userlib/, so it looks like:
    userlib/
        fhir-ig-us-core-4.10.0-SNAPSHOT.jar
        fhir-ig-davinci-hrex-4.10.0-SNAPSHOT.jar
        fhir-operation-member-match-4.10.0-SNAPSHOT.jar
    
    export WORKSPACE=~/git/wffh/2021/fhir
    mkdir -p ${WORKSPACE}/tmp/userlib
    cp -p conformance/fhir-ig-davinci-hrex/target/fhir-ig-davinci-hrex-4.10.0-SNAPSHOT.jar ${WORKSPACE}/tmp/userlib/
    cp -p conformance/fhir-ig-us-core/target/fhir-ig-us-core-4.10.0-SNAPSHOT.jar ${WORKSPACE}/tmp/userlib/
    cp -p operation/fhir-operation-member-match/target/fhir-operation-member-match-4.10.0-SNAPSHOT.jar ${WORKSPACE}/tmp/userlib/
    

    Note the use of -SNAPSHOT versions, as these are not yet released.

    3. Download the fhir-server-config.json.
    curl -L -o fhir-server-config.json \
        https://raw.githubusercontent.com/IBM/FHIR/main/fhir-server/liberty-config/config/default/fhir-server-config.json
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  8423  100  8423    0     0  40495      0 --:--:-- --:--:-- --:--:-- 40301
    
    4. Start the Docker container, and capture the container id. It’s going to take a few moments to start up as it lays down the test database.
    docker run -d -p 9443:9443 -e BOOTSTRAP_DB=true \
      -v $(pwd)/fhir-server-config.json:/config/config/default/fhir-server-config.json \
      -v ${WORKSPACE}/tmp/userlib:/config/userlib/ \
      ibmcom/ibm-fhir-server:latest
    4334334a3a6ad395c4b600e14c8563d7b8a652de1d3fdf14bc8aad9e6682cc02
    
    5. Check the logs until you see:
    docker logs 4334334a3a6ad395c4b600e14c8563d7b8a652de1d3fdf14bc8aad9e6682cc02
    ...
    [6/16/21, 15:31:34:533 UTC] 0000002a FeatureManage A   CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 17.665 seconds.
    
    6. Download the Sample Data
    curl -L 'https://raw.githubusercontent.com/IBM/FHIR/main/conformance/fhir-ig-davinci-hrex/src/test/resources/JSON/020/Parameters-member-match-in.json' \
    -o Parameters-member-match-in.json
    
    7. Split the resources out from the sample:
    cat Parameters-member-match-in.json | jq -r '.parameter[0].resource' > Patient.json
    cat Parameters-member-match-in.json | jq -r '.parameter[1].resource' > Coverage.json
    
    8. Load the Sample Data to the IBM FHIR Server
    curl -k --location --request PUT 'https://localhost:9443/fhir-server/api/v4/Patient/1' \
    --header 'Content-Type: application/fhir+json' \
    --header 'Prefer: return=representation' \
    --user "fhiruser:${DUMMY_PASSWORD}" \
    --data-binary  "@Patient.json"
    
    curl -k --location --request PUT 'https://localhost:9443/fhir-server/api/v4/Coverage/9876B1' \
    --header 'Content-Type: application/fhir+json' \
    --header 'Prefer: return=representation' \
    --user "fhiruser:${DUMMY_PASSWORD}" \
    --data-binary  "@Coverage.json"
    

    Note, DUMMY_PASSWORD should be previously set to your server’s password.

    9. Execute the Member Match
    curl -k --location --request POST 'https://localhost:9443/fhir-server/api/v4/Patient/$member-match' \
    --header 'Content-Type: application/fhir+json' \
    --header 'Prefer: return=representation' \
    --user "fhiruser:${DUMMY_PASSWORD}" \
    --data-binary  "@Parameters-member-match-in.json" -o response.json
    

    When you execute the operation, it runs two visitors across the Parameters input to generate searches against the persistence store:

    • DefaultMemberMatchStrategy.MemberMatchPatientSearchCompiler – enables the processing of a Patient resource into a MultivaluedMap, which is subsequently used for the search operation. Note there are no SearchParameters for the us-core-race, us-core-ethnicity, and us-core-birthsex elements in the US Core Patient profile. The following fields are combined in a search for the Patient:

        - Patient.identifier
        - Patient.name
        - Patient.telecom
        - Patient.gender
        - Patient.birthDate
        - Patient.address
        - Patient.communication
      
    • DefaultMemberMatchStrategy.MemberMatchCovergeSearchCompiler – Coverage is a bit unique here. It’s the CoverageToMatch – details of prior health plan coverage provided by the member, typically from their health plan coverage card – and it has dubious provenance. The following fields are combined in a search for the Coverage:

        - Coverage.identifier
        - Coverage.beneficiary
        - Coverage.payor
        - Coverage.subscriber
        - Coverage.subscriberId
      

    Best wishes with MemberMatch.

  • Checking fillfactor for Postgres Tables

    My teammate implemented Adjust PostgreSQL fillfactor for tables involving updates #1834, which adjusts how much data is packed into each table page.

    Per Cybertec, fillfactor is important: "INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page." (Link) As such, my teammate implemented in a PR a change to adjust the fillfactor to co-locate INSERTs/UPDATEs in the same space.
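
    If you want to apply such a setting yourself, here is a minimal sketch via JDBC, assuming the test1234.basic_resources table from the results below and placeholder connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SetFillfactor {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials - point these at your database
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/fhirdb", "fhiradmin", "change-password");
                 Statement stmt = conn.createStatement()) {
                // Reserve 10% of each page for updated row copies; only pages
                // written after this change are affected (VACUUM FULL or
                // CLUSTER rewrites existing pages)
                stmt.executeUpdate("ALTER TABLE test1234.basic_resources SET (fillfactor = 90)");
            }
        }
    }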

    Query

    If you want to check your fillfactor settings, you can check the pg_class admin table to see your settings using the following script:

    SELECT 
    	pc.relname as "Table Name", 
    	pc.reloptions As "Settings on Table",
    	pc.relkind as "Table Type"
    FROM pg_class AS pc
    INNER JOIN pg_namespace AS pns 
    	ON pns.oid = pc.relnamespace
    WHERE pns.nspname = 'test1234'
    	AND pc.relkind = 'r';
    

    Note

    1. relkind represents the object type; char r is an ordinary table. A good reference is the following snippet: r = ordinary table, i = index, S = sequence, v = view, m = materialized view, c = composite type, t = TOAST table, f = foreign table.
    2. nspname is the schema you are checking for the fillfactor values.

    Results

    You see the value:

    basic_resources,{autovacuum_vacuum_scale_factor=0.01,autovacuum_vacuum_threshold=1000,autovacuum_vacuum_cost_limit=2000,fillfactor=90},'r'
    


  • GitHub Action Workflow Tips

    I went through a knowledge transfer with a teammate who is implementing GitHub Actions and workflows in their repository. My team has been working with GitHub Actions since they became available to developers. You can see my team’s workflows at https://github.com/IBM/FHIR/tree/main/.github/workflows and our automation scripts at https://github.com/IBM/FHIR/tree/main/build

    Here are my tips:

    Pushing Changes to GH Pages

    If you need to push changes back to GitHub, I recommend you check out your code to a working subfolder, build in that subfolder, copy the artifacts back to another subfolder, and then push those changes (after a git add and a signed commit back to git).

    Triggering GH Pages Build

    In my build and release process, I generate my own website artifacts. I then need to call the API to trigger the GH Pages workflow, as it is not automatically triggered by pushing the artifacts directly to the gh-pages branch. This trick starts the deployment of the branch to the gh-pages environment. It uses curl and the GitHub API.
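
    A hedged sketch of the same trick in Java rather than curl, using GitHub’s "request a Pages build" endpoint; OWNER/REPO and the token environment variable are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TriggerPagesBuild {
        public static void main(String[] args) throws Exception {
            String token = System.getenv("GITHUB_TOKEN"); // personal access token
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/repos/OWNER/REPO/pages/builds"))
                .header("Accept", "application/vnd.github+json")
                .header("Authorization", "token " + token)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }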

    Grabbing the Current Tag

    I found this helpful to grab the tag and inject it into the GitHub environment variables for subsequent workflow job steps.

    Conditionally Skip based on a Label

    You should be able to skip your workflow at any given point; you can add a conditional to skip on, for instance, a ci-skip label in your repo.

    Capture your logs and Upload no matter what

    Workflows are designed to skip dependent steps on failure – Step B is skipped because Step A failed. It’s worth adding a step at the end of your workflow to gather any debug logs, pack them up, and upload them in all conditions.

    The condition is set with if: always().

    Lock your ‘uses’ workflow versions

    Lock in your workflow’s uses on a specific version. For instance, you can lock in on actions/upload-artifact or actions/checkout, and use the organization/repository to check the documentation on GitHub. Here are some key Actions and the links to their repos.

    actions/checkout – checkout
    actions/upload-artifact – upload artifacts