This article walks through using the IBM FHIR Server’s Helm chart with Docker Desktop Kubernetes and getting it up and running.
Category: IBM FHIR Server
-
Bulk Data Configurations for IBM FHIR Server’s Storage Providers
As of IBM FHIR Server 4.10.2…
A colleague of mine is entering into the depths of the IBM FHIR Server’s Bulk Data feature. Each tenant in the IBM FHIR Server may specify multiple storageProviders. The default provider is assumed unless one is selected with the HTTP headers `X-FHIR-BULKDATA-PROVIDER` and `X-FHIR-BULKDATA-PROVIDER-OUTCOME`. Each tenant’s configuration may mix the different providers; however, each provider is only of a single type. For instance, `minio` is `aws-s3`, `default` is `file`, and `az` is `azure-blob`. Note, the `https` type is only applicable to `$import` operations; export is only supported with the `aws-s3`, `azure-blob`, and `file` types.

File Storage Provider Configuration
The `file` storage provider uses a directory local to the IBM FHIR Server. The fileBase is an absolute path that must exist; each import inputUrl is a relative path from the fileBase. The file provider is available for both import and export. Authentication is not supported.

```json
{
  "__comment": "IBM FHIR Server BulkData - File Storage configuration",
  "fhirServer": {
    "bulkdata": {
      "__comment": "The other bulkdata configuration elements are skipped",
      "storageProviders": {
        "default": {
          "type": "file",
          "fileBase": "/opt/wlp/usr/servers/fhir-server/output",
          "disableOperationOutcomes": true,
          "duplicationCheck": false,
          "validateResources": false,
          "create": false
        }
      }
    }
  }
}
```
An example request is:
```json
{
  "resourceType": "Parameters",
  "id": "30321130-5032-49fb-be54-9b8b82b2445a",
  "parameter": [
    {
      "name": "inputSource",
      "valueUri": "https://my-server/source-fhir-server"
    },
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "AllergyIntolerance"
        },
        {
          "name": "url",
          "valueUrl": "r4_AllergyIntolerance.ndjson"
        }
      ]
    },
    {
      "name": "storageDetail",
      "valueString": "file"
    }
  ]
}
```
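Requests like the one above can also be assembled programmatically. A minimal Python sketch — the helper function is hypothetical, not part of any IBM FHIR Server client library — that builds the same Parameters payload for the file provider:

```python
import json

# Hypothetical helper that assembles a Bulk Data $import Parameters
# resource for the "file" storage provider. For the file provider,
# each url is a relative path from the configured fileBase.
def build_file_import_parameters(input_source, entries, storage_type="file"):
    parts = [
        {"name": "inputSource", "valueUri": input_source},
        {"name": "inputFormat", "valueString": "application/fhir+ndjson"},
    ]
    for resource_type, relative_url in entries:
        parts.append({
            "name": "input",
            "part": [
                {"name": "type", "valueString": resource_type},
                {"name": "url", "valueUrl": relative_url},
            ],
        })
    parts.append({"name": "storageDetail", "valueString": storage_type})
    return {"resourceType": "Parameters", "parameter": parts}

payload = build_file_import_parameters(
    "https://my-server/source-fhir-server",
    [("AllergyIntolerance", "r4_AllergyIntolerance.ndjson")],
)
print(json.dumps(payload, indent=2))
```

The payload printed here matches the shape of the example request above (minus the `id`, which a client typically does not need to set).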
HTTPS Storage Provider Configuration
The `https` storage provider uses a set of validBaseUrls to confirm each `$import` inputUrl is acceptable. The https provider is available for import only. Authentication is not supported.

```json
{
  "__comment": "IBM FHIR Server BulkData - Https Storage configuration (Import only)",
  "fhirServer": {
    "bulkdata": {
      "__comment": "The other bulkdata configuration elements are skipped",
      "storageProviders": {
        "default": {
          "type": "https",
          "__comment": "The whitelist of valid base urls, you can always disable",
          "validBaseUrls": [],
          "disableBaseUrlValidation": true,
          "__comment": "You can always direct to another provider",
          "disableOperationOutcomes": true,
          "duplicationCheck": false,
          "validateResources": false
        }
      }
    }
  }
}
```
An example request is:
```json
{
  "resourceType": "Parameters",
  "id": "30321130-5032-49fb-be54-9b8b82b2445a",
  "parameter": [
    {
      "name": "inputSource",
      "valueUri": "https://my-server/source-fhir-server"
    },
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "AllergyIntolerance"
        },
        {
          "name": "url",
          "valueUrl": "https://validbaseurl.com/r4_AllergyIntolerance.ndjson"
        }
      ]
    },
    {
      "name": "storageDetail",
      "valueString": "https"
    }
  ]
}
```
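The validBaseUrls idea can be illustrated with a small Python sketch. This is only illustrative of the whitelist concept — it is not the server’s actual implementation:

```python
# Sketch of a base-URL whitelist check like the one the https storage
# provider applies to $import inputUrls. Illustrative only.
def is_valid_input_url(input_url, valid_base_urls, disable_validation=False):
    # disableBaseUrlValidation skips the whitelist entirely
    if disable_validation:
        return True
    return any(input_url.startswith(base) for base in valid_base_urls)

valid = ["https://validbaseurl.com/"]
print(is_valid_input_url("https://validbaseurl.com/r4_AllergyIntolerance.ndjson", valid))  # True
print(is_valid_input_url("https://evil.example.com/data.ndjson", valid))  # False
```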
Azure Storage Provider Configuration
The `azure-blob` storage provider uses a connection string from the Azure Blob configuration. The bucketName is the blob storage name. The azure-blob provider supports import and export. Authentication and configuration are built into the connection string.

```json
{
  "__comment": "IBM FHIR Server BulkData - Azure Blob Storage configuration",
  "fhirServer": {
    "bulkdata": {
      "__comment": "The other bulkdata configuration elements are skipped",
      "storageProviders": {
        "default": {
          "type": "azure-blob",
          "bucketName": "fhirtest",
          "auth": {
            "type": "connection",
            "connection": "DefaultEndpointsProtocol=https;AccountName=fhirdt;AccountKey=ABCDEF==;EndpointSuffix=core.windows.net"
          },
          "disableBaseUrlValidation": true,
          "disableOperationOutcomes": true
        }
      }
    }
  }
}
```
An example request is:
```json
{
  "resourceType": "Parameters",
  "id": "30321130-5032-49fb-be54-9b8b82b2445a",
  "parameter": [
    {
      "name": "inputSource",
      "valueUri": "https://my-server/source-fhir-server"
    },
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "AllergyIntolerance"
        },
        {
          "name": "url",
          "valueUrl": "r4_AllergyIntolerance.ndjson"
        }
      ]
    },
    {
      "name": "storageDetail",
      "valueString": "azure-blob"
    }
  ]
}
```
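Since authentication is built into the connection string, it can help to sanity-check its parts before putting it in the configuration. A small Python sketch — the account values are the placeholders from the configuration example, and the parser is just an illustration of the key/value format:

```python
# Split an Azure Blob connection string of the form
# "Key1=Value1;Key2=Value2;..." into a dict. The AccountKey value may
# itself contain "=" (base64 padding), so split each pair only once.
def parse_connection_string(conn):
    return dict(pair.split("=", 1) for pair in conn.split(";") if pair)

conn = ("DefaultEndpointsProtocol=https;AccountName=fhirdt;"
        "AccountKey=ABCDEF==;EndpointSuffix=core.windows.net")
parts = parse_connection_string(conn)
print(parts["AccountName"])  # fhirdt
print(parts["AccountKey"])   # ABCDEF==
```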
S3 Storage Provider Configuration
The `aws-s3` storage provider supports import and export. The bucketName, location, auth style (hmac, iam), endpointInternal, and endpointExternal are separate values in the configuration. Note, enableParquet is obsolete.

```json
{
  "__comment": "IBM FHIR Server BulkData - AWS/COS/Minio S3 Storage configuration",
  "fhirServer": {
    "bulkdata": {
      "__comment": "The other bulkdata configuration elements are skipped",
      "storageProviders": {
        "default": {
          "type": "aws-s3",
          "bucketName": "myfhirbucket",
          "location": "us-east-2",
          "endpointInternal": "https://s3.us-east-2.amazonaws.com",
          "endpointExternal": "https://myfhirbucket.s3.us-east-2.amazonaws.com",
          "auth": {
            "type": "hmac",
            "accessKeyId": "AKIAAAAF2TOAAATMAAAO",
            "secretAccessKey": "mmmUVsqKzAAAAM0QDSxH9IiaGQAAA"
          },
          "enableParquet": false,
          "disableBaseUrlValidation": true,
          "exportPublic": false,
          "disableOperationOutcomes": true,
          "duplicationCheck": false,
          "validateResources": false,
          "create": false,
          "presigned": true,
          "accessType": "host"
        }
      }
    }
  }
}
```
An example request is:
```json
{
  "resourceType": "Parameters",
  "id": "30321130-5032-49fb-be54-9b8b82b2445a",
  "parameter": [
    {
      "name": "inputSource",
      "valueUri": "https://my-server/source-fhir-server"
    },
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "AllergyIntolerance"
        },
        {
          "name": "url",
          "valueUrl": "r4_AllergyIntolerance.ndjson"
        }
      ]
    },
    {
      "name": "storageDetail",
      "valueString": "aws-s3"
    }
  ]
}
```
Note, you can exchange `aws-s3` and `ibm-cos` as the value of `parameter.where(name='storageDetail')`; these are treated interchangeably.
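A small Python sketch of that interchangeability on the client side — the normalization helper is hypothetical, for illustration only:

```python
# Hypothetical client-side helper: since the server treats "aws-s3" and
# "ibm-cos" interchangeably in storageDetail, normalize them to one value.
def normalize_storage_type(storage_detail):
    if storage_detail in ("aws-s3", "ibm-cos"):
        return "aws-s3"
    return storage_detail

print(normalize_storage_type("ibm-cos"))     # aws-s3
print(normalize_storage_type("aws-s3"))      # aws-s3
print(normalize_storage_type("azure-blob"))  # azure-blob
```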
There are lots of possible configurations; I hope this helps you.
-
Not Yet Another Docker to Rancher Desktop Alternative
Docker is changing its license for Docker Desktop going forward, as noted in their license and blog. Much like a former colleague of mine’s article YADPBP: Yet Another Docker to Podman Blog Post, I have entered into the Docker Desktop migration.
I’ve tried minikube, microk8s, podman, Lima-vm and Rancher Desktop. Many of these solutions run in a single VM, such as multipass. In fact, I tried using Multipass with Podman installed inside of the multipass VM. I found the networking and forwarding needed while testing multiple containers in a local dev environment were a pain. I spent a few days working with minikube, microk8s, and podman, and ended up on Rancher Desktop.
Rancher Desktop has flavors for Mac and Linux (I don’t run Windows as a base OS anymore). I downloaded one of the tech preview releases from GitHub and installed it. It’s fairly simple, and they have a straightforward readme. One trick: be sure to install/set up `nerdctl`. `nerdctl` is a Docker-compatible command line replacement and integrates seamlessly with Rancher Desktop.

```shell
~/$ nerdctl run -p 9443:9443 --name fhir -e BOOTSTRAP_DB=true ibmcom/ibm-fhir-server
docker.io/ibmcom/ibm-fhir-server:latest: resolved
manifest-sha256:41f6894fa546899e02e4a8d2370bb6910eb72ed77ec58ae06c3de5e12f3ebb1c: done
config-sha256:3c912cc1a5b7c69ae15c9b969ae0085839b926e825b6555a28518458f4bd4935: done
layer-sha256:06038631a24a25348b51d1bfc7d0a0ee555552a8998f8328f9b657d02dd4c64c: done
layer-sha256:661abc6f8cb3c6d78932032ce87eb294f43f6eca9daa7681816d83ee0f62fb3d: done
layer-sha256:e74a68c65fb24cc6fabe5f925d450cae385b2605d8837d5d7500bdd5bad7f268: done
layer-sha256:262268b65bd5f33784d6a61514964887bc18bc00c60c588bc62bfae7edca46f1: done
layer-sha256:d5e08b0b786452d230adf5d9050ce06b4f4d73f89454a25116927242507b603b: done
layer-sha256:50dc68e56d6bac757f0176b8b49cffc234879e221c64a8481805239073638fb4: done
layer-sha256:1831e571c997bd295bd5ae59bfafd69ba942bfe9e63f334cfdc35a8c86886d47: done
layer-sha256:d29b7147ca6a2263381a0e4f3076a034b223c041d2a8f82755c30a373bb6ade7: done
layer-sha256:a2643035bb64ff12bb72e7b47b1d88e0cdbc3846b5577a9ee9c44baf7c707b20: done
layer-sha256:3ba05464ea94778cacf3f55c7b11d7d41293c1fc169e9e290b48e2928eaad779: done
layer-sha256:6fb3372b06eb12842f94f14039c1d84608cbed52f56d3862f2c545d65e784a00: done
layer-sha256:4cf8515f0f05c79594b976e803ea54e62fcaee1f6e5cfadb354ab687b758ed55: done
layer-sha256:4debf1aa73b3e81393dc46e2f3c9334f6400e5b0160beb00196d0e5803af1e63: done
layer-sha256:ecaacecff5f80531308a1948790550b421ca642f57b78ea090b286f74f3a7ba1: done
layer-sha256:1ccf6767107a3807289170cc0149b6f60b5ed2f52ba3ba9b00b8d320951c4317: done
layer-sha256:8144e53119b8ac586492370a117aa83bc31cf439c70663a58894fc1dfe9a4e08: done
layer-sha256:16bdcde4e18e3d74352c7e42090514c7f2e0213604c74e5a6bf938647c195546: done
layer-sha256:e9726188008a01782dcb61103c7d892f605032386f5ba7ea2acbcb6cf9770a0e: done
layer-sha256:c37730e2eaef6bbb446d2ebe5ec230eef4abdb36e6153778d1ae8416f5543e7d: done
layer-sha256:35d3a4502906b8e3a4c962902925f8e1932c8fb012fa84e875494049d8a6b324: done
elapsed: 94.3s total: 696.1 (7.4 MiB/s)
bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Current directory: /opt/ibm-fhir-server
bootstrap.sh - [INFO]: 2021-12-07_19:42:09 - Performing Derby database bootstrapping
2021-12-07 19:42:11.348 00000001 INFO .common.JdbcConnectionProvider Opening connection to database: jdbc:derby:/output/derby/fhirDB;create=true
2021-12-07 19:42:13.138 00000001 WARNING ls.pool.PoolConnectionProvider Get connection took 1.791 seconds
2021-12-07 19:42:13.382 00000001 INFO m.fhir.schema.app.LeaseManager Requesting update lease for schema 'APP' [attempt 1]
```
When you see `ready to run a smarter planet`, the server is started.

```
[12/7/21, 19:45:00:437 UTC] 00000027 FeatureManage A CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 20.229 seconds.
```
When running the `$healthcheck`, you see:

```shell
curl -u fhiruser:change-password https://localhost:9443/fhir-server/api/v4/\$healthcheck -k -H "Prefer: return=OperationOutcome"
{"resourceType":"OperationOutcome","issue":[{"severity":"information","code":"informational","details":{"text":"All OK"}}]}
```
Rancher Desktop is up… time to run with it…
-
Checking fillfactor for Postgres Tables
My teammate implemented Adjust PostgreSQL fillfactor for tables involving updates #1834, which adjusts how much data is packed into each table page.
Per Cybertec, fillfactor is important: "INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page." As such, my teammate implemented a change in a PR to adjust the fillfactor to co-locate INSERTs/UPDATEs in the same space.

Query
If you want to check your fillfactor settings, you can check the `pg_class` admin table to see your settings using the following script:

```sql
SELECT pc.relname AS "Table Name",
       pc.reloptions AS "Settings on Table",
       pc.relkind AS "Table Type"
FROM pg_class AS pc
INNER JOIN pg_namespace AS pns ON pns.oid = pc.relnamespace
WHERE pns.nspname = 'test1234'
  AND pc.relkind = 'r';
```
Note: relkind represents the object type; char `r` is a table. A good reference is the following snippet:

```
relkind char: r = ordinary table, i = index, S = sequence, v = view, m = materialized view, c = composite type, t = TOAST table, f = foreign table
```

`nspname` is the schema you are checking for the fillfactor values.
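If you need to change the setting on a table, a hedged sketch of the adjustment — the schema and table names follow the example in this post, so substitute your own. Note that changing fillfactor only affects newly written pages, so existing data needs a rewrite:

```sql
-- Adjust fillfactor on an existing table (schema/table names from the example above)
ALTER TABLE test1234.basic_resources SET (fillfactor = 90);

-- Rewrite existing pages so the new setting applies (takes an exclusive lock)
VACUUM FULL test1234.basic_resources;
```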
Results
You see the value:

```
basic_resources | {autovacuum_vacuum_scale_factor=0.01,autovacuum_vacuum_threshold=1000,autovacuum_vacuum_cost_limit=2000,fillfactor=90} | r
```
-
Job and Bulk Data APIs
Here are some shortcut APIs for Open Liberty and the IBM FHIR Server’s batch feature.
-
IBM Digital Developer Conference: Hybrid Cloud – Integrating Healthcare Data in a Serverless World
My lab is now live and available on the IBM Digital Developer Conference. In my session, developers integrate a healthcare data application using IBM FHIR Server with OpenShift serverless, to create and respond to actual healthcare scenarios.
The lab materials are available at https://prb112.github.io/healthcare-serverless/ and you need to sign up for the lab using the following instructions.
Signing up for the Lab
To get access to the lab environment, follow these instructions:

1. Get added to the IBM Cloud account “DEGCLOUD” using the following app:
– https://account-invite.mybluemix.net/
– Enter Lab key: welcome
– Enter IBM ID: the email you used to sign up for the Digital Developer Conference
2. You will then get an invite message in your email which you must accept to continue. It doesn’t matter if you see an “oops” message here.
3. Then, you can use the following app to get access to a cluster:
– Open https://ddc-healthcare-lab.mybluemix.net
– Enter Lab key: oslab
– Enter IBM ID: the email you used to create your IBM Cloud account
To jump right to the session, go to the IBM website: https://developer.ibm.com/conferences/digital-developer-conference-hybrid-cloud/track-5-labs/s2-healthcare-data-serverless/
Video of the Walk Through
-
Digital Developer Conference – Hybrid Cloud: Integrating Healthcare Data in a Serverless World
Recently, I developed and presented this lab, which gets released in late September 2021.
In this lab, developers integrate a healthcare data application using IBM FHIR Server with Red Hat OpenShift Serverless to create and respond to a healthcare scenario.
This lab is a companion to the session Integrating Healthcare Data in a Serverless World at Digital Developer Conference – Hybrid Cloud.
The content for this lab can be found at https://ibm.biz/ibm-fhir-server-healthcare-serverless.
Have fun! Enjoy… Ask Questions… I’m here to help.
-
Recipe: Azure Postgres with IBM FHIR Server Bulk Data
One of the prerequisites for setting up IBM FHIR Server Bulk Data is setting max_prepared_transactions, since the IBM FHIR Server leverages Open Liberty Java Batch, which uses an XA transaction.
If you are using Azure, here are the steps for updating your Postgres resource.
1. Navigate to the Azure Portal.
2. Find your Postgres resource.
3. Update your Server Parameters: set max_prepared_transactions to 200 (anything non-zero is recommended to enable XA).
4. Click Save.
5. Click Overview.
6. Click Restart.
7. Click on Activity Log.
8. Wait until Postgres is restarted.
9. Restart your IBM FHIR Server, and you are ready to use the Bulk Data feature.
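If you script your infrastructure, the same parameter change can be sketched with the Azure CLI. The resource group and server names below are placeholders, and the commands assume Azure Database for PostgreSQL single server — verify against your Azure CLI version and server offering:

```shell
# Set max_prepared_transactions (placeholder resource group/server names)
az postgres server configuration set \
  --resource-group my-resource-group \
  --server-name my-postgres-server \
  --name max_prepared_transactions \
  --value 200

# Restart the server so the parameter takes effect
az postgres server restart \
  --resource-group my-resource-group \
  --name my-postgres-server
```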
If you don’t do the setup, you’ll see a log like the following:
```
[9/2/21, 1:49:38:257 UTC] [step1 partition0] com.ibm.fhir.bulkdata.jbatch.listener.StepChunkListener StepChunkListener: job[bulkexportfastjob/8/15] --- javax.transaction.RollbackException
com.ibm.jbatch.container.exception.TransactionManagementException: javax.transaction.RollbackException
    at com.ibm.jbatch.container.transaction.impl.JTAUserTransactionAdapter.commit(JTAUserTransactionAdapter.java:108)
    at com.ibm.jbatch.container.controller.impl.ChunkStepControllerImpl.invokeChunk(ChunkStepControllerImpl.java:656)
    at com.ibm.jbatch.container.controller.impl.ChunkStepControllerImpl.invokeCoreStep(ChunkStepControllerImpl.java:795)
    at com.ibm.jbatch.container.controller.impl.BaseStepControllerImpl.execute(BaseStepControllerImpl.java:295)
    at com.ibm.jbatch.container.controller.impl.ExecutionTransitioner.doExecutionLoop(ExecutionTransitioner.java:118)
    at com.ibm.jbatch.container.controller.impl.WorkUnitThreadControllerImpl.executeCoreTransitionLoop(WorkUnitThreadControllerImpl.java:96)
    at com.ibm.jbatch.container.controller.impl.WorkUnitThreadControllerImpl.executeWorkUnit(WorkUnitThreadControllerImpl.java:178)
    at com.ibm.jbatch.container.controller.impl.WorkUnitThreadControllerImpl$AbstractControllerHelper.runExecutionOnThread(WorkUnitThreadControllerImpl.java:503)
    at com.ibm.jbatch.container.controller.impl.WorkUnitThreadControllerImpl.runExecutionOnThread(WorkUnitThreadControllerImpl.java:92)
    at com.ibm.jbatch.container.util.BatchWorkUnit.run(BatchWorkUnit.java:113)
    at com.ibm.ws.context.service.serializable.ContextualRunnable.run(ContextualRunnable.java:79)
    at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:238)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:866)
Caused by: javax.transaction.RollbackException
    at com.ibm.tx.jta.impl.TransactionImpl.stage3CommitProcessing(TransactionImpl.java:978)
    at com.ibm.tx.jta.impl.TransactionImpl.processCommit(TransactionImpl.java:778)
    at com.ibm.tx.jta.impl.TransactionImpl.commit(TransactionImpl.java:711)
    at com.ibm.tx.jta.impl.TranManagerImpl.commit(TranManagerImpl.java:165)
    at com.ibm.tx.jta.impl.TranManagerSet.commit(TranManagerSet.java:113)
    at com.ibm.tx.jta.impl.UserTransactionImpl.commit(UserTransactionImpl.java:162)
    at com.ibm.tx.jta.embeddable.impl.EmbeddableUserTransactionImpl.commit(EmbeddableUserTransactionImpl.java:101)
    at com.ibm.ws.transaction.services.UserTransactionService.commit(UserTransactionService.java:72)
    at com.ibm.jbatch.container.transaction.impl.JTAUserTransactionAdapter.commit(JTAUserTransactionAdapter.java:101)
```
Go back and enable max_prepared_transactions
-
Demonstrating the FHIR Extended Operation Composition/$document
Per the specification, a client can ask a server to generate a fully bundled document from a Composition resource. I’ve pulled together a Postman collection to help demonstrate this feature on the IBM FHIR Server.
1. Download the Postman collection.
2. Update the SERVER_HOSTNAME.
3. Update the Authorization for your username and password.
4. Click Tests > Run.
5. Click Run.
6. You’ll see the tests run.
7. Click on the test of interest, and then check the curl you are interested in, such as:
```shell
curl --location --request GET 'https://localhost:9443/fhir-server/api/v4/Composition/17b83b99f91-3c0d6274-0498-4fe4-999e-ba8574f85b09/$document?persist=true' \
  --header 'Content-Type: application/fhir+json' \
  --header 'Authorization: Basic .....'
```
Good Luck with Composition/$document.