This was super helpful for debugging S3 files/folders.
If you hit this issue, per the GitHub community it's a Windows issue that can be worked around by increasing the pagefile size. link
[ERROR] OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000700000000, 553648128, 0) failed; error='The paging file is too small for this operation to complete' (DOS error/errno=1455)
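The error means the JVM could not commit about 528 MB of memory because Windows ran out of commit charge (physical RAM plus pagefile). Besides enlarging the pagefile, capping the JVM heap can avoid the failure; for a Liberty-based server such as the IBM FHIR Server this goes in `jvm.options` (the sizes below are illustrative, not recommendations — tune them for your machine):

```
# jvm.options — illustrative sizes; tune for your machine
-Xms512m
-Xmx2g
```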
Here are some examples I generated from a demo with the IBM FHIR Server.
In version 4.6.0, the IBM FHIR Server refactored its Bulk Data operations, which support the HL7 FHIR BulkDataAccess IG (STU1) and the Proposal for the $import Operation.
To demonstrate the new features in a quick script, use the following to set up an S3 demonstration end to end.
Set up the IBM FHIR Server
Load Sample Data
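Sample data is usually loaded by POSTing a FHIR transaction Bundle to the server's base endpoint (`/fhir-server/api/v4` by default on the IBM FHIR Server). A minimal sketch of building such a Bundle — the resource content here is illustrative, not part of the original demo:

```python
import json

def make_transaction_bundle(resources):
    """Wrap a list of FHIR resources in a transaction Bundle,
    so they can be loaded with a single POST to the base endpoint."""
    return {
        "resourceType": "Bundle",
        "type": "transaction",
        "entry": [
            {
                "resource": r,
                # POST each entry to its own resource type endpoint
                "request": {"method": "POST", "url": r["resourceType"]},
            }
            for r in resources
        ],
    }

patients = [{"resourceType": "Patient", "name": [{"family": "Demo"}]}]
bundle = make_transaction_bundle(patients)
print(json.dumps(bundle, indent=2))
```

The resulting JSON is what you would send with `Content-Type: application/fhir+json` to the server's base URL.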
The IBM FHIR Server supports audit events for FHIR operations (create, read, update, delete, and operation invocations) in both Cloud Auditing Data Federation (CADF) and HL7 FHIR AuditEvent formats, and can push the events to an Apache Kafka backend. You can read more about it on the IBM FHIR Server site.
Let's spin up an IBM FHIR Server with fhir-audit enabled and see what we get from a running container.
A question I had recently was how to use the IBM FHIR Server with IBM Cloud Functions. Here is the recipe:
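One shape such a recipe can take — a rough sketch, not the original recipe — is an IBM Cloud Functions (OpenWhisk-style) Python action whose `main(params)` entry point searches the FHIR server. The endpoint below is a placeholder, and a real action would also need credentials:

```python
import json
from urllib import request, parse

FHIR_BASE = "https://example.com/fhir-server/api/v4"  # placeholder endpoint

def build_search_url(base, resource_type, params):
    """Compose a FHIR search URL, e.g. .../Patient?_count=5"""
    return f"{base}/{resource_type}?{parse.urlencode(params)}"

def main(params):
    """IBM Cloud Functions (OpenWhisk) entry point: dict in, dict out."""
    url = build_search_url(FHIR_BASE, params.get("type", "Patient"),
                           {"_count": params.get("count", 5)})
    req = request.Request(url, headers={"Accept": "application/fhir+json"})
    with request.urlopen(req) as resp:  # network call; needs a live server
        return {"status": resp.status, "body": json.loads(resp.read())}
```

The action would be deployed with `ibmcloud fn action create` and invoked with `type`/`count` parameters.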
Question: How do I decode the job id from a Bulk Data export or import on the IBM FHIR Server? You can use the following code to walk through and decode your job id using your password.
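The job id returned in the export/import `Content-Location` is an opaque, URL-safe token rather than the raw batch job number; as the question implies, it is protected with a server-side password before being encoded. A hedged sketch of just the outer base64url layer (the password-based decryption step is omitted here, and the token below is made up):

```python
import base64

def b64url_decode(token: str) -> bytes:
    """Reverse URL-safe base64, restoring any stripped '=' padding."""
    padded = token + "=" * (-len(token) % 4)
    return base64.urlsafe_b64decode(padded)

# Round-trip demo with a made-up token; a real job id may additionally
# be encrypted with the server-side password underneath this encoding.
token = base64.urlsafe_b64encode(b"42").rstrip(b"=").decode()
print(b64url_decode(token))  # b'42'
```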