Blog

  • Learning Resources for Operators – First Two Weeks Notes

    To quote the Kubernetes website, “The Operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.” The following is a compendium to use while learning Operators.

    The de facto SDK to use is the Operator SDK, which provides Helm, Ansible, and Go scaffolding to support your implementation of the Operator pattern.

    The following are education classes on the Operator SDK:

    When running through the CO0201EN intermediate operators course, I hit a case where I had to create a ClusterRole and ClusterRoleBinding for the ServiceAccount; here is a snippet that might help others:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      # ClusterRoles are cluster-scoped; the namespace field is ignored
      namespace: memcached-operator-system
      name: service-reader-cr-mc
    rules:
    - apiGroups: ["cache.bastide.org"] # the API group of the custom resource
      resources: ["memcacheds"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      # ClusterRoleBindings are cluster-scoped; the namespace field is ignored
      namespace: memcached-operator-system
      name: ext-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: service-reader-cr-mc
    subjects:
    - kind: ServiceAccount
      namespace: memcached-operator-system
      name: memcached-operator-controller-manager

    The reason for the above: I had missed adding the kubebuilder RBAC markers:

    //+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
    //+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch

    Thanks to https://stackoverflow.com/a/60334649/1873438

    The following are articles worth reviewing:

    The following are good Go resources:

    1. Go Code Comments – To write idiomatic Go, you should review the Code Review comments.
    2. Getting to Go: The Journey of Go’s Garbage Collector – The reference for Go and garbage collection in Go
    3. An overview of memory management in Go – good overview of Go memory management
    4. Golang: Cost of using the heap – net: a 1M allocation seems to stay on the stack, while anything larger seems to end up on the heap
    5. golangci-lint – The aggregated linters project is worthy of an installation and use. It’ll catch many issues and has a corresponding GitHub Action.
    6. Go in 3 Weeks – A comprehensive training for Go. Companion GitHub repo.
    7. Defensive Coding Guide: The Go Programming Language

    The following are good OpenShift resources:

    1. Create OpenShift Plugins – You must have a CLI plug-in file that begins with oc- or kubectl-. You create a file and put it in /usr/local/bin/
    2. Details on running Code Ready Containers on Linux – The key hack I learned was to ssh -i ~/.crc/machines/crc/id_ecdsa core@<any host in the /etc/hosts>
      1. I ran on VirtualBox Ubuntu 20.04 with Guest Additions installed
      2. VirtualBox settings for the machine – 6 CPU, 18G
        1. System > Processor > Enable PAE/NX and Enable Nested VT-X/AMD-V (which is a must for it to work)
        2. Network > Change Adapter Type to virtio-net and Set Promiscuous Mode to Allow VMs
      3. Install openssh-server so you can log in remotely
      4. It will not install without a windowing system, so I have the default windowing environment installed.
      5. Note, I still get a failure on startup complaining about a timeout. I waited about 15 minutes after this, and the command oc get nodes --context admin --cluster crc --kubeconfig .crc/cache/crc_libvirt_4.10.3_amd64/kubeconfig now works.
    3. CRC virsh cheatsheet – If you are running Code Ready Containers and need to debug, you can use the virsh cheatsheet.
  • Hack: Fast Forwarding a Video

    I had to watch 19 hours of slow paced videos for a training on a new software product (at least new to me). I like fast paced trainings… enter a browser hack.

    In Firefox, Navigate to Tools > Browser Tools > Web Developer Tools

    Click Console

    Type the following snippet to find the first video on the page and change its playback rate, then press Enter.

    document.getElementsByTagName('video').item(0).playbackRate = 4.0

    Note, 4.0 can be unintelligible; you’ll need to tweak the speed to match what you need. I found 2.5 to 3.0 to be very comfortable (you just can’t multitask).

  • The Grit in Processing Unicode Strings with NDJSON

    Unicode is pretty amazing: you can encode strings in single or multibyte characters. Perhaps a smile… 😀 which is U+1F600. It’s pretty cool, so cool you should read If UTF-8 is an 8-bit encoding, why does it need 1-4 bytes? which has four key sequences for UTF-8:

       Char. number range  |        UTF-8 octet sequence
          (hexadecimal)    |              (binary)
       --------------------+---------------------------------------------
       0000 0000-0000 007F | 0xxxxxxx
       0000 0080-0000 07FF | 110xxxxx 10xxxxxx
       0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
       0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
    
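    To see those byte lengths from Java, here is a quick standard-library sketch that prints how many UTF-8 bytes a few sample code points need (one from each range in the table):

    import java.nio.charset.StandardCharsets;

    public class Utf8Lengths {
        public static void main(String[] args) {
            // one sample from each UTF-8 range: U+0041, U+00E9, U+20AC, U+1F600
            String[] samples = { "A", "é", "€", "😀" };
            for (String s : samples) {
                int codePoint = s.codePointAt(0);
                int bytes = s.getBytes(StandardCharsets.UTF_8).length;
                System.out.printf("U+%04X -> %d byte(s)%n", codePoint, bytes);
            }
        }
    }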

    Until recently, I’ve been working with NDJSON files as part of the HL7 FHIR: Bulk Data Access IG to export healthcare data and the proposed Import specification to import healthcare data. These files store one JSON object per line, delimited with a \n, such as:

    {"resourceType":"Patient"}
    {"resourceType":"Patient"}
    {"resourceType":"Patient"}
    

    The following Java snippet generates a substantial set of lines that can be injected into a stream for testing with Unicode (and are newline delimited).

    StringBuilder line = new StringBuilder();
    // walk the code points from space (U+0020) up to U+1F64F
    for (int codePoint = 32; codePoint <= 0x1F64F; codePoint++) {
        line.append(Character.toChars(codePoint));
        // break to a new line every 64 code points so the output is newline delimited
        if (codePoint % 64 == 0) {
            line.append("\n");
        }
    }
    System.out.println(line.toString());
    
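    One caveat with that range: it walks straight through the UTF-16 surrogate block (U+D800–U+DFFF), which does not represent valid Unicode scalar values and will not round-trip cleanly through UTF-8. If you want strictly valid test data, a small variant of the loop skips that block:

    StringBuilder line = new StringBuilder();
    for (int codePoint = 32; codePoint <= 0x1F64F; codePoint++) {
        if (codePoint >= 0xD800 && codePoint <= 0xDFFF) {
            continue; // skip the surrogate block; lone surrogates are not valid scalar values
        }
        line.append(Character.toChars(codePoint));
        if (codePoint % 64 == 0) {
            line.append("\n");
        }
    }
    System.out.println(line.toString());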

    This data is processed asynchronously on OpenLiberty JavaBatch as a set of jobs. These jobs process data through a Read(Source)-Checkpoint-Write(Sink) pattern. The pattern ensures enough data is read from the source before a write action on the sink.

    I found that processing variable data with an unknown Unicode set needed a counting stream to keep track of the bytes. The CountingStream acts as a delegate to accumulate bytes, track the length of the processed values, and find the end of a line or the end of the file.

    public static class CountingStream extends InputStream {
            private static final int LF = '\n';
            private static final long MAX_LENGTH_PER_LINE = 2147483648L; // 2^31 bytes
    
            // 256kb block
            private ByteArrayOutputStream out = new ByteArrayOutputStream(256000);
            private boolean eol = false;
            private long length = 0;
    
            private InputStream delegate;
    
            /**
             * ctor
             * @param in
             */
            public CountingStream(InputStream in) {
                this.delegate = in;
            }
    
            /**
             * reset the line
             */
            public void resetLine() {
                out.reset();
                eol = false;
            }
    
            /**
             * @return the length of the resources returned in the reader
             */
            public long getLength() {
                return length;
            }
    
            /**
             * Gets the String representing the line of bytes.
             * 
             * @return
             * @throws UnsupportedEncodingException
             */
            public String getLine() throws UnsupportedEncodingException {
                String str = new String(out.toByteArray(), "UTF-8");
                if (str.isEmpty()) {
                    str = null;
                }
                return str;
            }
    
            public boolean eol() {
                return eol;
            }
    
            /**
             * Returns the line that is aggregated up until a new line character
             * @return
             * @throws IOException
             */
            public String readLine() throws IOException {
                int r = read();
                while (r != -1) {
                    if (eol()) {
                        eol = false;
                        return getLine();
                    }
                    r = read();
                }
                // end of the stream; return whatever has been accumulated (null if nothing was buffered)
                return getLine();
            }
    
            @Override
            public int read() throws IOException {
                int r = delegate.read();
                if (r == -1) {
                    return -1;
                }
                byte b = (byte) r;
                if (LF == (int) b) {
                    length++;
                    eol = true;
                } else {
                    length++;
                    if (length == MAX_LENGTH_PER_LINE) {
                        throw new IOException("Current Line in NDJSON exceeds limit " + MAX_LENGTH_PER_LINE);
                    }
                    out.write(b);
                }
                return r; // return the value as 0-255 per the InputStream contract; returning the signed byte could be mistaken for EOF
            }
        }
    

    I found one important thing in the delegate, with thanks to a colleague and CERT – you must accumulate the bytes and have a maximum size per line. The CERT article is STR50-J. Use the appropriate method for counting characters in a string.
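
    To make the CERT point concrete, here is a small standard-library example: for a JSON line containing an emoji, the UTF-16 char count, the code point count, and the UTF-8 byte count all differ, so pick the measure that matches your contract (bytes, in the case of the line limit above):

    import java.nio.charset.StandardCharsets;

    public class CountingExample {
        public static void main(String[] args) {
            String s = "{\"note\":\"😀\"}";
            System.out.println("UTF-16 chars: " + s.length());                                // 13
            System.out.println("code points:  " + s.codePointCount(0, s.length()));           // 12
            System.out.println("UTF-8 bytes:  " + s.getBytes(StandardCharsets.UTF_8).length); // 15
        }
    }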

    The grit here is:

    1. Accumulate: Don’t process a character at a time in int read(); accumulate your bytes and defer to the String creation in Java so the data is decoded with your project’s encoding.
    2. Set a limit: Don’t process the data without bound; stop when it violates a set contract.
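
    Putting it together, here is a minimal usage sketch of the CountingStream above. It assumes the class compiles as shown (it is declared as a nested static class, so adjust the reference to wherever you place it) and feeds it an in-memory NDJSON payload:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    public class CountingStreamDemo {
        public static void main(String[] args) throws IOException {
            byte[] ndjson = "{\"resourceType\":\"Patient\"}\n{\"resourceType\":\"Practitioner\"}\n"
                    .getBytes(StandardCharsets.UTF_8);
            try (InputStream in = new ByteArrayInputStream(ndjson)) {
                CountingStream counting = new CountingStream(in);
                String line = counting.readLine();
                while (line != null) {
                    System.out.println(counting.getLength() + " bytes read so far: " + line);
                    counting.resetLine(); // clear the per-line buffer before reading the next line
                    line = counting.readLine();
                }
            }
        }
    }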

    If you are doing more complicated processing – say you are streaming from Azure Blob, Amazon S3, or HTTPS and need to process the stream as chunks – you’ll want to do something a bit more involved.

    The grit here is:

    1. Read blocks and not the whole stream: read a block of bytes at a time instead of ‘draining’ the whole stream, stopping once a sufficient block is retrieved.
    2. Assemble lines across multiple block reads.

    The code looks like this:

        public static class CountingStream extends InputStream {
            private static final int LF = '\n';
            private static final long MAX_LENGTH_PER_LINE = 2147483648L; // 2^31 bytes
    
            // 256kb block
            private ByteArrayOutputStream out;
            private long length = 0;
    
            private InputStream delegate;
    
            /**
             * 
             * @param out ByteArrayOutputStream caches the data across reads
             * @param in InputStream is generally the S3InputStream
             */
            public CountingStream(ByteArrayOutputStream out, InputStream in) {
                this.out = out;
                this.delegate = in;
            }
    
            /**
             * Gets the String representing the line of bytes.
             * 
             * @return
             * @throws UnsupportedEncodingException
             */
            public String getLine() throws UnsupportedEncodingException {
                String str = new String(out.toByteArray(), "UTF-8");
                if (str.isEmpty()) {
                    str = null;
                }
                return str;
            }
    
            @Override
            public int read() throws IOException {
                return delegate.read();
            }
    
            /**
             * drains the stream so we don't leave a hanging connection
             * @throws IOException
             */
            public void drain() throws IOException {
                int l = delegate.read();
                while (l != -1) {
                    l = delegate.read();
                }
            }
    
            /**
             * 
             * @param counter
             * @return
             * @throws IOException
             */
            public String readLine() throws IOException {
                int r = read();
                if (r == -1) {
                    return null;
                } else {
                    String result = null;
                    while (r != -1) {
                        byte b = (byte) r;
                        if (LF == (int) b) {
                            length++;
                            r = -1;
                            result = getLine();
                            out.reset();
                        } else {
                            length++;
                            if (length == MAX_LENGTH_PER_LINE) {
                                throw new IOException("Current Line in NDJSON exceeds limit " + MAX_LENGTH_PER_LINE);
                            }
                            out.write(b);
                            r = read();
                        }
                    }
                    return result;
                }
            }
        }
    

    Importantly, the code defers the caching to the EXTERNAL caller, and in this case assembles a window of resources:

        protected void readFromObjectStoreWithLowMaxRange(AmazonS3 c, String b, String workItem) throws FHIRException {
    
            // Don't add tempResources to resources until we're done (we do retry), it's a temporary cache of the Resources
            List<Resource> tempResources = new ArrayList<>();
    
            // number of bytes read.
            long numberOfBytesRead = 0l;
            int totalReads = 0;
            int mux = 0;
    
            // The cached FHIRParserException
            FHIRParserException fpeDownstream = null;
    
            // Closed when the Scope is out. The size is double the read window.
            // The backing array is allocated at creation.
            ByteArrayOutputStream cacheOut = new ByteArrayOutputStream(512000);
            boolean complete = false;
            while (!complete) {
                // Condition: At the end of the file... and it should never be more than the file Size
                // however, in rare circumstances the person may have 'grown' or added to the file
                // while operating on the $import and we want to defensively end rather than an exact match
                // Early exit from the loop...
                long start = this.transientUserData.getCurrentBytes();
                if (this.transientUserData.getImportFileSize() <= start) {
                    complete = true; // NOP
                    break;
                }
    
                // Condition: Window would exceed the maximum File Size
                // Prune the end to -1 off the maximum.
                // The following is 256K window. 256K is used so we only drain a portion of the inputstream.
                // and not the whole file's input stream.
                long end = start + 256000;
                if (end >= this.transientUserData.getImportFileSize()) {
                    end = this.transientUserData.getImportFileSize() - 1;
                    complete = true; // We still need to process the bytes.
                }
    
                // Request the start and end of the S3ObjectInputStream that's going to be retrieved
                GetObjectRequest req = new GetObjectRequest(b, workItem)
                                                .withRange(start, end);
    
                if (LOG.isLoggable(Level.FINE)) {
                    // Useful when debugging edge of the stream problems
                    LOG.fine("S3ObjectInputStream --- " + start + " " + end);
                }
    
                boolean parsedWithIssue = false;
                try (S3Object obj = c.getObject(req);
                        S3ObjectInputStream in = obj.getObjectContent();
                        BufferedInputStream buffer = new BufferedInputStream(in);
                        CountingStream reader = new CountingStream(cacheOut, in)) {
    
                    // The interior block allows a drain operation to be executed finally.
                    // as a best practice we want to drain the remainder of the input
                    // this drain should be at worst 255K (-1 for new line character)
                    try {
                        String resourceStr = reader.readLine();
                        // The first line is a large resource
                        if (resourceStr == null) {
                            this.transientUserData.setCurrentBytes(this.transientUserData.getCurrentBytes() + reader.length);
                            reader.length = 0;
                            mux++;
                        }
    
                        while (resourceStr != null && totalReads < maxRead) {
                            try (StringReader stringReader = new StringReader(resourceStr)) {
                                tempResources.add(FHIRParser.parser(Format.JSON).parse(stringReader));
                            } catch (FHIRParserException fpe) {
                                // Log and skip the invalid FHIR resource.
                                parseFailures++;
                                parsedWithIssue = true;
                                fpeDownstream = fpe;
                            }
    
                            long priorLineLength = reader.length;
                            reader.length = 0;
                            resourceStr = reader.readLine();
    
                            if (!parsedWithIssue) {
                                this.transientUserData.setCurrentBytes(this.transientUserData.getCurrentBytes() + priorLineLength);
                                numberOfBytesRead += reader.length;
                                totalReads++;
                            } else if ((parsedWithIssue && resourceStr != null)
                                    || (parsedWithIssue && 
                                            (this.transientUserData.getImportFileSize() <= this.transientUserData.getCurrentBytes() + priorLineLength))) { 
                                // This is potentially end of bad line
                                // -or-
                                // This is the last line failing to parse
                                long line = this.transientUserData.getNumOfProcessedResources() + totalReads;
                                LOG.log(Level.SEVERE, "readResources: Failed to parse line " + totalReads + " of [" + workItem + "].", fpeDownstream);
                                String msg = "readResources: " + "Failed to parse line " + line + " of [" + workItem + "].";
    
                                ConfigurationAdapter adapter = ConfigurationFactory.getInstance();
                                String out = adapter.getOperationOutcomeProvider(source);
                                boolean collectImportOperationOutcomes = adapter.shouldStorageProviderCollectOperationOutcomes(source)
                                        && !StorageType.HTTPS.equals(adapter.getStorageProviderStorageType(out));
                                if (collectImportOperationOutcomes) {
                                    FHIRGenerator.generator(Format.JSON)
                                        .generate(generateException(line, msg),
                                                transientUserData.getBufferStreamForImportError());
                                    transientUserData.getBufferStreamForImportError().write(NDJSON_LINESEPERATOR);
                                }
                            }
                        }
                    } catch (Exception ex) {
                        LOG.warning("readFhirResourceFromObjectStore: Error processing file [" + workItem + "] - " + ex.getMessage());
                        // Throw exception to fail the job, the job can be continued from the current checkpoint after the
                        // problem is solved.
                        throw new FHIRException("Unable to read from S3 during processing", ex);
                    } finally {
                        try {
                            reader.drain();
                        } catch (Exception s3e) {
                            LOG.fine(() -> "Error while draining the stream, this is benign");
                            LOG.throwing("S3Provider", "readFromObjectStoreWithLowMaxRange", s3e);
                        }
                    }
    
                    // Increment if the last line fails
                    if (this.transientUserData.getImportFileSize() <= this.transientUserData.getCurrentBytes()) {
                        parseFailures++;
                    }
                } catch (FHIRException fe) {
                    throw fe;
                } catch (Exception e) {
                    throw new FHIRException("Unable to read from S3 File", e);
                }
    
                // Condition: The optimized block and the number of Resources read
                // exceed the minimum thresholds or the maximum size of a single resource
                if (tempResources.size() >= maxRead) {
                    LOG.fine("TempResourceSize " + tempResources.size());
                    complete = true;
                }
    
                // Condition: The optimized block is exceeded and the number of resources is
                // only one so we want to threshold a maximum number of resources
                // 512K * 5 segments (we don't want to repeat too much work) = 2.6M
                if (numberOfBytesRead > 2621440 && tempResources.size() >= 1) {
                    complete = true;
                }
    
                // Condition: The maximum read block is exceeded and we have at least one Resource
                // 2147483648 / (256*1024*1024) = 8192 Reads
                if (mux == 8193) {
                    throw new FHIRException("Too Long a Line");
                }
    
                // We've read more than one window
                if (mux > 1 && tempResources.size() >=1) {
                    break;
                }
            }
    
            // Condition: There is no complete resource to read.
            if (totalReads == 0) {
                LOG.warning("File grew since the start");
                this.transientUserData.setCurrentBytes(this.transientUserData.getImportFileSize());
            }
    
            // Add the accumulated resources
            this.resources.addAll(tempResources);
        }
    

    The above code was created and licensed as part of the IBM/FHIR project.

    Net: approach Unicode formats carefully, and be careful when reassembling bytes and reading windows from channels.

  • Moving on…

    In 2019, I joined the IBM FHIR Server team – a team tasked with engineering an internal FHIR server (DSTU2) into an updated and upgraded open source HL7 FHIR R4 server. The open sourced code on GitHub, IBM® FHIR® Server – IBM/FHIR, is a product of many contributors since its inception in 2016 (the project history goes back to the DSTU2 days). I contributed over 1,000 commits during my time working on the project, authored over 300 issues, opened, updated, and closed 600-plus pull requests, and triaged, reviewed, and designed many more.

    Today I’m moving on to IBM Power Systems and working on OpenShift.

    Contributions to the Project – Automation, Search, Hard Erase, Performance, Data Storage, Bulk Data

  • GitHub Actions Braindump

    The following are from a braindump I did for my team (everything here is public knowledge):

    Getting Setup to Building and Developing with the Workflows

    This section outlines setting up your development environment for working with workflows:

    1. Download Visual Studio Code. This tool is best kept outside of your environment.
    2. Click Extensions > Search for PowerShell and install the PowerShell extension. This will also install PowerShell locally on your system. PowerShell is used in the Windows workflow.
    3. Install ShellCheck. It is used to check your code and make sure you are following best practices when writing the shell scripts.
    4. Add an alias to your terminal settings:

    alias code='/Applications/Visual\ Studio\ Code.app/Contents/Resources/app/bin/code'

    For me, this is in the .zshrc file.

    You should also have Docker installed (or nerdctl aliased to docker). You should also have IBM Java 11 installed (v11.0.14.1).

    You’ll also need access to:

    Debugging GitHub Actions

    If the GitHub Action fails, you should check the following:

    1. Review the PR Workflow
      a. Click on Details
      b. Select the Job that is failing
      c. Click on the Settings > View raw logs
      d. Navigate down to the end of the file to the BUILD FAILURE
        • Scroll up from the BUILD FAILURE point (at least in Maven)
        • Look for the failing tests
      e. If there are not enough details, go back to Step (b)
        • Click on Summary
        • Scroll down to Artifacts
        • Download the Artifacts related to the build. These files are kept for 90 days or until we hit a storage limit. Note, we purposefully use the upload-artifacts step.
      f. Extract the files and you should have the Docker console log file, test results, and any other workflow logging.
    2. IBM/FHIR Repository Actions – You can filter through the list of Actions.
      a. Navigate to https://github.com/IBM/FHIR/actions?query=is%3Afailure
      b. You can filter on the failing workflows and get a good big picture.
    3. GitHub Status – You should subscribe to the site’s updates (via email). This is going to be very helpful to figure out what’s going on with GitHub. You should also check this when the transient errors appear systemic or non-deterministic – e.g. not failing in the same spot. At least one person on the team should sign up for the GitHub Status.
    4. GitHub Community: Actions – I rarely go here, but sometimes I find that someone has posted the same question, and it may have an answer. Very rarely, I post directly there.

    Debugging

    If you encounter anything that looks transient – e.g., network download (Wagon Failure), disk space, filesystem failure – you can choose to rerun the failing workflow.

    1. Navigate to the failing Job-Workflow
    2. Click Re-run all jobs
    3. Repeat for all Workflows that failed

    If you see a failure in a particular GitHub step, such as actions/checkout or actions/setup-java, you should go search that action’s issues:

    1. If actions/setup-java is failing, navigate to https://github.com/actions/setup-java
    2. Check the Issues (Search for the issue)
    3. Search for settings that may help

    Note, sometimes they intentionally fail a GitHub Action workflow to signal that you should upgrade or change.

    How to do Local Development with GitHub Actions

    1. Navigate to https://github.com/ibm/fhir
    2. Click Fork
      1. If a Fork already exists, be sure to Fetch Upstream > Fetch and Merge
    3. Click Pull Requests and create the ci-skip label
      1. Click Labels
      2. Click New Label
      3. Click Create label
    4. Clone the fork – git clone https://github.com/prb112/FHIR.git
      1. I typically create a local folder called prb112 and then clone into it.
    5. Once the main branch is active, git checkout -b new-ci
    6. Open your code using your alias: code ~/dev/fhir
    7. Edit your automation files in .github and build
    8. Click Terminal > New Terminal
    9. Update the .github/workflows/<file> you are changing so the job.<workflow_job>.if condition is invalid:

    jobs:
      e2e-audit:
        runs-on: ubuntu-latest
        if: "!contains(github.event.pull_request.labels.*.name, 'ci-skip')"

    I change ci-skip to ci-skip-ignore so that I can run just that one targeted workflow.

    1. Test the steps locally by executing the steps in the workflow line-by-line in the terminal session.
    2. Once you are comfortable with the changes:
      1. git add <files>
      2. git commit -s -m "My edits for issue #2102010"
    3. Push your new branch – git push -u origin new-ci
    4. Create your new Pull Request targeting the IBM:fhir main branch and add ci-skip.

    The targeted workflow you are building with is the only one that runs. Note, you have a LIMITED number of execution minutes for GitHub Workflows.

    Finding GitHub Actions to use/tips in Use

    There are many folks using GitHub Actions, and many have figured out better patterns (or evolved to have better patterns).

    1. Go here – https://github.com/actions/
    2. Search: <my query> file:yml site:github.com
    Workflow Details

    Each workflow runs in a hosted-runner (or virtual-environment).  These hosted-runners are containers specified declaratively in the workflow:

    Flavor       Virtual Environment
    windows      windows-2019
    All Other    ubuntu-20.04

    These hosted-runners have a number of pre-installed libraries and tools – Docker, podman, java-11, jq, perl, python, yarn, et cetera.

    These workflows (except the site, javadocs and release) follow a similar pattern:

    1. Setup Prerequisites
    2. Run the Pre-integration Steps
    3. Execute the Tests
    4. Run the Post Integration Steps
    5. Archive the Results

    This pattern evolved from build.yml and integration.yml as the starting point all the way to the most recent migration.yml, which is the most sophisticated of the workflow jobs created.

  • Using the IBM FHIR Server and Implementation Guide as Java Modules

    The IBM FHIR Server is an extensible HL7 FHIR Server. The IBM FHIR server supports complicated ImplementationGuides (IGs), a set of rules of how a particular problem is solved using FHIR Resources. The implementation guides include a set of Profiles, ValueSets, CodeSystems and supporting resources (Examples, CapabilityStatements).

    The IBM FHIR Server supports the loading of NPM packages – stored in package.tgz. You can see the package at https://www.hl7.org/fhir/us/core/package.tgz (one just appends package.tgz to any IG site).

    The IBM FHIR Server includes a number of IGs built-tested-released with each tag.

    The IGs are Java modules which are specially formed to support the resources in the NPM packages. The Java modules use a ServiceLoader (activated at startup when the Java Module is in the classpath).

    The best way to start is to copy an existing fhir-ig, such as fhir-ig-us-core, and modify it as needed (package rename and updated files).

    The service loader uses the META-INF/services/com.ibm.fhir.registry.spi.FHIRRegistryResourceProvider file to activate the list of classes declared in that file.

    Each of the corresponding classes needs to be in src/main/java under the corresponding package (com.ibm.fhir.ig.us.core as above).

    The PackageRegistryResourceProvider navigates the src/main/resources to find the folder hl7/fhir/us/core/311 and loads the files referenced in the NPM packages index (.index.json).
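
    As a rough sketch of that wiring (the class name and package id below are illustrative, and the getPackageId() hook and base-class import location are assumptions – check an existing module such as fhir-ig-us-core for the exact signatures):

    // src/main/resources/META-INF/services/com.ibm.fhir.registry.spi.FHIRRegistryResourceProvider
    //   com.ibm.fhir.ig.us.core.USCore311ResourceProvider

    package com.ibm.fhir.ig.us.core;

    // assumed base-class location; verify against the fhir-registry module
    import com.ibm.fhir.registry.util.PackageRegistryResourceProvider;

    public class USCore311ResourceProvider extends PackageRegistryResourceProvider {
        @Override
        public String getPackageId() {
            // maps to the resources under src/main/resources/hl7/fhir/us/core/311
            return "hl7.fhir.us.core.311";
        }
    }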

    You might not see the .index.json file by default in Eclipse; you should unhide the .* resources at the Enterprise Explorer View > Filters and Customization: deselect .* resources and click OK.

    When you open .index.json, you should see the list of files in the package; these are the resources which will be loaded when fhir-ig-us-core is added to the userlib folder.

    The US Core and CARIN BB modules support multiple versions of the same IG – v3.1.1 and v4.0.0 in the same Java module. To control this behavior, one needs to set the configuration to map to a default version (the default is always the latest version). Cross-IG dependencies are updated to point to the correct version, or the latest one the IG specifies.

    To make these files viewable, we like to format the contents of these folders as pretty-printed JSON. When the IGs are built and released, the JSON files are compressed, which saves a good chunk of memory.

    We also like to remove Narrative texts:

    "text": {
         "status": "empty",
         "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\">Redacted for size</div>"
    },

    There is an examples folder included with most package.tgz files. You should copy this into the src/test/resources/JSON folder and update the index.txt so you have the latest examples in place; these will be loaded by the ProfileTest (ConformanceTest is used for use-case specific tests). Below is an example of loading the 400/index.txt and failing when issues exceed a limit or have a severity of error.

    public class ProfileTest {
    
        private static final String INDEX = "./src/test/resources/JSON/400/index.txt";
    
        private String path = null;
    
        public ProfileTest() {
            // No Operation
        }
    
        public ProfileTest(String path) {
            this.path = path;
        }
    
        @Test
        public void testUSCoreValidation() throws Exception {
            try (Reader r = Files.newBufferedReader(Paths.get(path))) {
                Resource resource = FHIRParser.parser(Format.JSON).parse(r);
                List<Issue> issues = FHIRValidator.validator().validate(resource);
                issues.forEach(item -> {
                    if (item.getSeverity().getValue().equals("error")) {
                        System.out.println(path + " " + item);
                    }
                });
                assertEquals(countErrors(issues), 0);
            } catch (Exception e) {
                System.out.println("Exception with " + path);
                fail(e.toString());
            }
        }
    
        @Factory
        public Object[] createInstances() {
            List<Object> result = new ArrayList<>();
    
            try (BufferedReader br = Files.newBufferedReader(Paths.get(INDEX))) {
                String line;
                while ((line = br.readLine()) != null) {
                    result.add(new ProfileTest(line));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return result.toArray();
        }
    
    }

    The ProviderTest checks that the provider loads successfully, that the number of resources returned is correct, and that one exemplary resource is returned.

    @Test
        public void testRegistry() {
            StructureDefinition definition = FHIRRegistry.getInstance()
                    .getResource("http://hl7.org/fhir/us/core/StructureDefinition/pediatric-bmi-for-age", StructureDefinition.class);
            assertNotNull(definition);
            assertEquals(definition.getVersion().getValue(), "4.0.0");
        }
    
        @Test
        public void testUSCoreResourceProvider() {
            FHIRRegistryResourceProvider provider = new USCore400ResourceProvider();
            assertEquals(provider.getRegistryResources().size(), 148);
        }

    There is a very important test, testConstraintGenerator: any issue in the structure definition will be reported when the constraints are compiled, and you’ll get some really good warnings.

        @Test
        public static void testConstraintGenerator() throws Exception {
            FHIRRegistryResourceProvider provider = new USCore400ResourceProvider();
            for (FHIRRegistryResource registryResource : provider.getRegistryResources()) {
                if (StructureDefinition.class.equals(registryResource.getResourceType())) {
                    assertEquals(registryResource.getVersion().toString(), "4.0.0");
                    String url = registryResource.getUrl() + "|4.0.0";
                    System.out.println(url);
                    Class<?> type = ModelSupport.isResourceType(registryResource.getType()) ? ModelSupport.getResourceType(registryResource.getType()) : Extension.class;
                    for (Constraint constraint : ProfileSupport.getConstraints(url, type)) {
                        System.out.println("    " + constraint);
                        if (!Constraint.LOCATION_BASE.equals(constraint.location())) {
                            compile(constraint.location());
                        }
                        compile(constraint.expression());
                    }
                }
            }
        }

    For example, you might get a pediatric BMI set of constraints:

    http://hl7.org/fhir/us/core/StructureDefinition/pediatric-bmi-for-age|4.0.0
        @com.ibm.fhir.model.annotation.Constraint(id="vs-1", level="Rule", location="Observation.effective", description="if Observation.effective[x] is dateTime and has a value then that value shall be precise to the day", expression="($this as dateTime).toString().length() >= 8", source="http://hl7.org/fhir/StructureDefinition/vitalsigns", modelChecked=false, generated=false, validatorClass=interface com.ibm.fhir.model.annotation.Constraint$FHIRPathConstraintValidator)
        @com.ibm.fhir.model.annotation.Constraint(id="vs-2", level="Rule", location="(base)", description="If there is no component or hasMember element then either a value[x] or a data absent reason must be present.", expression="(component.empty() and hasMember.empty()) implies (dataAbsentReason.exists() or value.exists())", source="http://hl7.org/fhir/StructureDefinition/vitalsigns", modelChecked=false, generated=false, validatorClass=interface com.ibm.fhir.model.annotation.Constraint$FHIRPathConstraintValidator)

    The above constraints can be tested by evaluating the FHIRPath expression against a failing test resource to confirm the validity of the StructureDefinition.
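
    A sketch of that kind of spot check, in the same style as the tests above (imports omitted as in the other snippets; the example file name is hypothetical, and the FHIRPathEvaluator usage assumes the com.ibm.fhir.path evaluator API; the expression is the vs-2 constraint from the output above):

    @Test
    public void testVitalSignsConstraintExpression() throws Exception {
        // hypothetical failing example copied into the test resources folder
        try (Reader r = Files.newBufferedReader(Paths.get("./src/test/resources/JSON/400/Observation-bmi-example.json"))) {
            Resource resource = FHIRParser.parser(Format.JSON).parse(r);
            // the vs-2 expression from the constraint output above
            String expr = "(component.empty() and hasMember.empty()) implies (dataAbsentReason.exists() or value.exists())";
            Collection<FHIRPathNode> result = FHIRPathEvaluator.evaluator().evaluate(resource, expr);
            System.out.println("vs-2 evaluated to: " + result);
        }
    }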

    I hope this document helps you build your own IGs for the IBM FHIR Server.

  • Connectathon 29: IBM FHIR Server and the Bulk Data Track

    I recently attended the HL7 FHIR Connectathon 29. For those that are not familiar with Connectathons, I think they are fairly unique events featuring standards enthusiasts, vendors, and implementors doing hands-on standards development (FHIR) and testing. As an attendee I picked one of the tracks: bulk data.

    The bulk data track tests the FHIR Bulk Data Access Implementation Guide (IG) – v2.0.0 STU2. For those unfamiliar with the standards process, STU refers to the level of maturity of the specification. This maturity aligns well with the associated ANSI certification process, where the highest level is normative and the "content is considered to be stable and has been ‘locked’". Connectathons test interoperability and the standards, and make the normative/locked-in version even more robust.

    This particular part of the spec (IG) provides for "efficient access to large volumes of information on a group of individuals". Instead of making 100s of 1000s of individual requests, the IG defines an efficient asynchronous process for aggregating the relevant healthcare data into flat files. These flat files are in the NDJSON format, such as:

    {"resourceType":"Patient","name":[{"family":"Doe","given":["John"]}],"birthDate":"1970-01-01"}
    {"resourceType":"Patient","name":[{"family":"Doe","given":["Jane"]}],"birthDate":"1960-01-01"}
    

    For the IBM FHIR Server team, I brought our own server to the Connectathon to test one scenario, Scenario 2: Bulk data export with retrieval of referenced files on a protected endpoint. Our team stood up an IBM FHIR Server deployment using Kubernetes and Helm, configured with SMART Backend Services Authorization and IBM Cloud Object Storage, an S3-compatible service.

    This blog outlines the recipe to set up the IBM FHIR Server with SMART Backend Services Authorization and Bulk Data. The recipe also shows how to side-load data into the environment.
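
    For orientation before the setup, the interaction under test is the asynchronous $export kick-off defined by the IG: the client requests an export, receives a 202 Accepted with a Content-Location status URL to poll, and later downloads the referenced NDJSON files. A minimal Java 11 HttpClient sketch of the kick-off (the base URL and bearer token are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BulkExportKickoff {
        public static void main(String[] args) throws Exception {
            String base = "https://example.containers.appdomain.cloud/fhir-server/api/v4"; // placeholder
            String token = "REPLACE_WITH_ACCESS_TOKEN";                                     // placeholder

            HttpRequest kickoff = HttpRequest.newBuilder()
                    .uri(URI.create(base + "/$export?_type=Patient"))
                    .header("Accept", "application/fhir+json")
                    .header("Prefer", "respond-async")
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(kickoff, HttpResponse.BodyHandlers.ofString());

            // 202 Accepted; the Content-Location header is the status URL to poll
            System.out.println(response.statusCode());
            response.headers().firstValue("Content-Location").ifPresent(System.out::println);
        }
    }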

    1. Setup Prerequisites

    In order to complete this setup, you need to set up Kubernetes and Helm. In my case, I chose to install the ibmcloud tool, as IBM Cloud hosts my Kubernetes deployment.

    1. Install the tools:

    2. Install the plugins for ibmcloud

    When deploying the IBM FHIR Server Edition, you’ll need a few more plugins than the IBM Cloud defaults: cloud-object-storage, kubernetes-service, container-registry, and infrastructure-service.

    ibmcloud plugin repo-plugins -r "IBM Cloud"
    ibmcloud plugin install cloud-object-storage -f
    ibmcloud plugin install container-service
    ibmcloud plugin install container-registry -f
    ibmcloud plugin install infrastructure-service -f
    

    2. Setting up the S3 Bucket with HMAC

    The test environment uses bulk data with presigned URLs to store the bulk exported data.

    1. Login with an API Key (much easier if you use SSO)
    API_KEY=$(cat cloudpak.json | jq -r .apiKey)
    ibmcloud login --apikey ${API_KEY} -r us-east
    
    2. Create a Cloud Object Storage Instance, if it does not exist.
    ibmcloud resource service-instance-create \
        my-bulk-data \
        cloud-object-storage standard global
    CRN=$(ibmcloud resource service-instance \
        my-bulk-data --output JSON | jq -r '.[].crn')
    ibmcloud cos config crn --crn "${CRN}"
    ibmcloud cos create-bucket --bucket \
        "fhir-bulk-data"
    ibmcloud resource service-key-create \
        test-user-hmac Writer --instance-id "${CRN}" \
        --parameters '{"HMAC":true}' --output JSON
    
    3. You’ll see a JSON output with cos_hmac_keys; save this for later.
    "cos_hmac_keys": {
        "access_key_id": "abcdefgh",
        "secret_access_key": "xyzmnopq"
    }
    

    The details of the environment can be output:

    ibmcloud resource service-instance \
        my-bulk-data --output JSON
    
    4. Check the endpoints
    curl https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints -o endpoints.json 
    
    5. In endpoints.json, find the internalUrl (the private one) and the externalUrl (the direct one) for the location of your Cloud Object Storage, and record them along with the region. Note, I used a regional COS instance.
    ...
        "regional": {
          "us-south": {
            "public": {
              "us-south": "s3.us-south.cloud-object-storage.appdomain.cloud"
            },
            "private": {
              "us-south": "s3.private.us-south.cloud-object-storage.appdomain.cloud"
            },
            "direct": {
              "us-south": "s3.direct.us-south.cloud-object-storage.appdomain.cloud"
            }
          }
    ...
    
    6. Here is a table for your reference:
    Name         Value
    bucketname   fhir-bulk-data
    accessKey    abcdefgh
    secretKey    xyzmnopq
    region       us-south
    internalUrl  s3.private.us-south.cloud-object-storage.appdomain.cloud
    externalUrl  s3.direct.us-south.cloud-object-storage.appdomain.cloud

    3. Create the Cluster

    VPC_ID=$(ibmcloud ks vpcs --provider vpc-gen2 --output json \
        | jq -r .[].id)
    SUBNET_ID=$(ibmcloud ks subnets --provider vpc-gen2 \
        --vpc-id ${VPC_ID} --zone us-east-1 --output json \
        | jq -r '.[].id')
    ibmcloud oc cluster create vpc-gen2 \
        --name demo --flavor bx2.4x16 \
            --version 1.23.3 \
            --cos-instance ${CRN} \
            --service-subnet 172.21.0.0/16 --pod-subnet 172.17.64.0/18 \
            --workers 3 --zone us-east-1 --vpc-id=${VPC_ID} \
            --subnet-id ${SUBNET_ID}
    

    The IBM Cloud Kubernetes Service has comprehensive documentation on the IBM Cloud docs site.

    If you have questions about which version to pick, you can refer to ibmcloud ks versions or the docs.

    Once your cluster is up and operational and you can log in to the administration console, you are ready to target your deployment to the cluster.

    4. Build and Push the latest from IBM FHIR Server main

    Since there are features that impact Bulk Data support in main, it’s best to push the latest to a Docker registry and pull the latest into your environment.

    1. Clone the IBM FHIR Server repository and switch to the cloned repository.
    git clone https://github.com/IBM/FHIR.git && cd $(basename $_ .git)
    
    2. Setup the examples
    mvn clean install -f fhir-examples -DskipTests
    
    3. Build the fhir projects
    mvn clean install -f fhir-parent -DskipTests
    
    4. Build the IBM FHIR Server
    export BUILD_ID=4.11.0-SNAPSHOT
    nerdctl build fhir-install -t prb112/ibm-fhir-server:latest
    nerdctl login docker.io
    nerdctl push docker.io/prb112/ibm-fhir-server:latest
    

    Now you have the IBM FHIR Server with the latest code deployed to a public registry. Note, you can always update to work off a private registry using a custom pull secret.

    5. Use Helm to deploy the IBM FHIR Server for SMART-on-FHIR access

    This Helm chart is very comprehensive and includes Postgres as a subchart and Keycloak with its own Postgres.

    1. Add the Helm Chart
    helm repo add alvearie https://alvearie.io/alvearie-helm
    
    2. Update the Helm Chart
    $ helm repo update alvearie
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "alvearie" chart repository
    Update Complete. ⎈Happy Helming!⎈
    
    3. Create a Postgres Password, and save this locally.
    export POSTGRES_PASSWORD=$(openssl rand -hex 20)
    echo $POSTGRES_PASSWORD
    
    4. Configure your kubectl for the target cluster
    ibmcloud ks cluster config --cluster demo
    

    You see:

    OK
    The configuration for demo was downloaded successfully.
    
    Added context for m to the current kubeconfig file.
    You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
    If you are accessing the cluster for the first time, 'kubectl' commands might fail for a few seconds while RBAC synchronizes.
    
    5. Create a namespace for the target deployment.
    kubectl create namespace example
    namespace/example created
    
    6. Set up the TLS Secret on the IBM-provided domain (docs)
    ibmcloud ks cluster get --cluster demo --output JSON | jq .ingress
    

    The output is

    {
      "hostname": "demo-12345-0000.us-east.containers.appdomain.cloud",
      "secretName": "demo-1235-0000",
      "status": "healthy",
      "message": "All Ingress components are healthy"
    }
    

    Copy the secret from the default namespace to the new namespace example

    kubectl get secret -n default demo-1235-0000 -o yaml | sed 's/namespace: .*/namespace: example/' | kubectl apply -n example -f  -
    secret/demo-1235-0000 created
    

    Save the hostname, and secretName for later.

    7. Set up the Secret for the IBM Cloud container registry (docs)
    kubectl get secret -n default all-icr-io -o yaml | sed 's/namespace: .*/namespace: example/' | kubectl apply -n example -f -
    secret/all-icr-io created
    

    Note, this secret is provided in the ibmcloud registry.

    8. Create an overrides file – values-example.yml – and update the following values:
    Value to Update                          Value to replace below                Notes
    image.repository                         REPLACE_WITH_YOUR_REPO                The location of the repository/image – docker.io/prb112/ibm-fhir-server, or the recommended default for this case, quay.io/alvearie/fhir-data-access
    postgresql.postgresqlPassword            REPLACE_WITH_YOUR_POSTGRES_PASSWORD   The Postgres password you generated
    keycloak.postgresql.postgresqlPassword   REPLACE_WITH_YOUR_POSTGRES_PASSWORD   The Postgres password you generated
    keycloak.adminPassword                   REPLACE_WITH_YOUR_ADMIN_PASSWORD      You can pick this password
    objectStorage.location                   REPLACE_WITH_COS_REGION               The COS bucket’s region
    objectStorage.endpointUrl                REPLACE_WITH_COS_ENDPOINT_URL         The COS endpoint URL (the direct one), prefixed with https
    objectStorage.accessKey                  REPLACE_WITH_ACCESS_KEY               The COS HMAC accessKey you created
    objectStorage.secretKey                  REPLACE_WITH_SECRET_KEY               The COS HMAC secretKey you created
    objectStorage.bulkDataBucketName         REPLACE_WITH_BUCKET_NAME              The bucket you previously created
    ingress.secretName                       REPLACE_SECRET_NAME_FOR_TLS           The secret name for TLS – demo-1235-0000 from above
    ingress.hostname                         REPLACE_WITH_INGRESS_HOSTNAME         The hostname recorded above – demo-12345-0000.us-east.containers.appdomain.cloud
    image:
      repository: REPLACE_WITH_YOUR_REPO
      tag: latest
      pullPolicy: Always
    ingress:
      hostname: "{{ $.Release.Namespace }}.REPLACE_WITH_INGRESS_HOSTNAME"
      tls:
        - secretName: REPLACE_SECRET_NAME_FOR_TLS
      annotations:
        nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    traceSpec: >-
      com.ibm.fhir.smart.*=fine:com.ibm.fhir.server.*=fine
    postgresql:
      enabled: true
      postgresqlPassword: REPLACE_WITH_YOUR_POSTGRES_PASSWORD
      nameOverride: postgres
    security:
      jwtValidation:
        enabled: true
      oauth:
        enabled: true
        regUrl: "https://{{ tpl $.Values.ingress.hostname $ }}/auth/realms/test/clients-registrations/openid-connect"
        authUrl: "https://{{ tpl $.Values.ingress.hostname $ }}/auth/realms/test/protocol/openid-connect/auth"
        tokenUrl: "https://{{ tpl $.Values.ingress.hostname $ }}/auth/realms/test/protocol/openid-connect/token"
        smart:
          enabled: true
          resourceScopes:
            - "patient/*.read"
            - "patient/AllergyIntolerance.read"
            - "patient/CarePlan.read"
            - "patient/CareTeam.read"
            - "patient/Condition.read"
            - "patient/Device.read"
            - "patient/DiagnosticReport.read"
            - "patient/DocumentReference.read"
            - "patient/Encounter.read"
            - "patient/ExplanationOfBenefit.read"
            - "patient/Goal.read"
            - "patient/Immunization.read"
            - "patient/Location.read"
            - "patient/Medication.read"
            - "patient/MedicationRequest.read"
            - "patient/MedicationDispense.read"
            - "patient/Observation.read"
            - "patient/Organization.read"
            - "patient/Patient.read"
            - "patient/Practitioner.read"
            - "patient/PractitionerRole.read"
            - "patient/Procedure.read"
            - "patient/Provenance.read"
            - "patient/RelatedPerson.read"
            - "system/*.read"
            - "system/AllergyIntolerance.read"
            - "system/CarePlan.read"
            - "system/CareTeam.read"
            - "system/Condition.read"
            - "system/Device.read"
            - "system/DiagnosticReport.read"
            - "system/DocumentReference.read"
            - "system/Encounter.read"
            - "system/ExplanationOfBenefit.read"
            - "system/Goal.read"
            - "system/Immunization.read"
            - "system/Location.read"
            - "system/Medication.read"
            - "system/MedicationRequest.read"
            - "system/MedicationDispense.read"
            - "system/Observation.read"
            - "system/Organization.read"
            - "system/Patient.read"
            - "system/Practitioner.read"
            - "system/PractitionerRole.read"
            - "system/Procedure.read"
            - "system/Provenance.read"
            - "system/RelatedPerson.read"
    keycloak:
      enabled: true
      adminUsername: admin
      adminPassword: REPLACE_WITH_YOUR_ADMIN_PASSWORD
      config:
        enabled: true
        realms:
          test:
            clients:
              inferno:
                consentRequired: true
                publicClient: true
                redirectURIs:
                  - "http://localhost:4567/inferno/*"
                defaultScopes: []
                optionalScopes:
                  - "patient/*.read"
                  - "patient/AllergyIntolerance.read"
                  - "patient/CarePlan.read"
                  - "patient/CareTeam.read"
                  - "patient/Condition.read"
                  - "patient/Device.read"
                  - "patient/DiagnosticReport.read"
                  - "patient/DocumentReference.read"
                  - "patient/Encounter.read"
                  - "patient/ExplanationOfBenefit.read"
                  - "patient/Goal.read"
                  - "patient/Immunization.read"
                  - "patient/Location.read"
                  - "patient/Medication.read"
                  - "patient/MedicationRequest.read"
                  - "patient/MedicationDispense.read"
                  - "patient/Observation.read"
                  - "patient/Organization.read"
                  - "patient/Patient.read"
                  - "patient/Practitioner.read"
                  - "patient/PractitionerRole.read"
                  - "patient/Procedure.read"
                  - "patient/Provenance.read"
                  - "patient/RelatedPerson.read"
              infernoBulk:
                consentRequired: false
                publicClient: false
                standardFlowEnabled: false
                serviceAccountsEnabled: true
                clientAuthenticatorType: client-jwt
                defaultScopes: []
                optionalScopes:
                  - "system/*.read"
                  - "system/AllergyIntolerance.read"
                  - "system/CarePlan.read"
                  - "system/CareTeam.read"
                  - "system/Condition.read"
                  - "system/Device.read"
                  - "system/DiagnosticReport.read"
                  - "system/DocumentReference.read"
                  - "system/Encounter.read"
                  - "system/ExplanationOfBenefit.read"
                  - "system/Goal.read"
                  - "system/Immunization.read"
                  - "system/Location.read"
                  - "system/Medication.read"
                  - "system/MedicationDispense.read"
                  - "system/MedicationRequest.read"
                  - "system/Observation.read"
                  - "system/Organization.read"
                  - "system/Patient.read"
                  - "system/Practitioner.read"
                  - "system/PractitionerRole.read"
                  - "system/Procedure.read"
                  - "system/Provenance.read"
                  - "system/RelatedPerson.read"
      ingress:
        enabled: true
        rules:
          - host: "{{ $.Release.Namespace }}.REPLACE_WITH_INGRESS_HOSTNAME"
            paths:
              - path: /auth
                pathType: Prefix
        servicePort: https
        tls:
          - secretName: REPLACE_SECRET_NAME_FOR_TLS
        annotations:
          nginx.ingress.kubernetes.io/server-snippet: |
            add_header Strict-Transport-Security "max-age=86400; includeSubDomains";
          nginx.ingress.kubernetes.io/backend-protocol: HTTPS
          nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
          nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.2 TLSv1.3
      postgresql:
        postgresqlPassword: REPLACE_WITH_YOUR_POSTGRES_PASSWORD
    objectStorage:
      enabled: true
      location: REPLACE_WITH_COS_REGION
      endpointUrl: https://REPLACE_WITH_COS_ENDPOINT_URL
      accessKey: REPLACE_WITH_ACCESS_KEY
      secretKey: REPLACE_WITH_SECRET_KEY
      bulkDataBucketName: REPLACE_WITH_BUCKET_NAME
      batchIdEncryptionKey:
    

    The above configuration enables read-only system scopes.

    9. Upgrade and install
    helm upgrade --install ibm-fhir-server alvearie/ibm-fhir-server -f values-example.yml --namespace=example
    

    Note, helm outputs the fhiruser password and the ingress.hostname; save these for later.

    10. Watch the pods until they are up in the Running state.
    kubectl -n example get pods -w
    NAME                               READY   STATUS     RESTARTS   AGE
    ibm-fhir-server-7557689c57-mq7zr   0/1     Init:0/1   0          53s
    ibm-fhir-server-7557689c57-tfjcl   0/1     Init:0/1   0          54s
    ibm-fhir-server-postgres-0         0/1     Pending    0          54s
    ibm-fhir-server-schematool-g2wq5   0/1     Init:0/1   0          54s
    

    Then the output looks like the following; wait until the ibm-fhir-server pods are Running.

    ibm-fhir-server-7557689c57-mq7zr   0/1     Init:0/1   0          53s
    ibm-fhir-server-7557689c57-tfjcl   0/1     Init:0/1   0          54s
    ibm-fhir-server-postgres-0         0/1     Pending    0          54s
    ibm-fhir-server-schematool-g2wq5   0/1     Init:0/1   0          54s
    ibm-fhir-server-postgres-0         0/1     Pending    0          73s
    ibm-fhir-server-postgres-0         0/1     ContainerCreating   0          73s
    ibm-fhir-server-postgres-0         0/1     ContainerCreating   0          2m19s
    ibm-fhir-server-postgres-0         0/1     Running             0          2m20s
    ibm-fhir-server-postgres-0         1/1     Running             0          2m33s
    ibm-fhir-server-7557689c57-mq7zr   0/1     PodInitializing     0          2m42s
    ibm-fhir-server-7557689c57-mq7zr   0/1     Running             0          2m43s
    ibm-fhir-server-7557689c57-tfjcl   0/1     PodInitializing     0          2m44s
    ibm-fhir-server-schematool-g2wq5   0/1     PodInitializing     0          2m44s
    ibm-fhir-server-7557689c57-tfjcl   0/1     Running             0          2m45s
    ibm-fhir-server-schematool-g2wq5   1/1     Running             0          2m45s
    ibm-fhir-server-schematool-g2wq5   0/1     Completed           0          3m48s
    ibm-fhir-server-schematool-g2wq5   0/1     Completed           0          3m49s
    ibm-fhir-server-7557689c57-mq7zr   1/1     Running             0          3m50s
    ibm-fhir-server-7557689c57-tfjcl   1/1     Running             0          3m51s
    
    1. Check $healthcheck
    curl -i -u 'fhiruser:REPLACE_WITH_PASSWORD' 'https://REPLACE_WITH_BASE_URL.containers.appdomain.cloud/fhir-server/api/v4/$healthcheck' -v
    
    < HTTP/2 200 
    HTTP/2 200 
    < date: Wed, 19 Jan 2022 16:25:27 GMT
    date: Wed, 19 Jan 2022 16:25:27 GMT
    < content-length: 0
    content-length: 0
    < content-language: en-US
    content-language: en-US
    < strict-transport-security: max-age=15724800; includeSubDomains
    strict-transport-security: max-age=15724800; includeSubDomains
    

    6. Login to Keycloak

    Keycloak provides the authentication and authorization service for IBM FHIR Server’s implementation of Smart-on-FHIR.

    1. Sign in to the Keycloak Console at https://REPLACE_WITH_BASE_URL/auth/ using the keycloak.admin value as the user and the keycloak.adminPassword value as the password.

    2. You are in the Test Realm. Click Clients > infernoBulk

    3. Select Use JWKS and enter https://bulk-data.smarthealthit.org/keys/RS384.public.json. Note this key is only for testing.

    {
        "keys": [
            {
                "kty": "RSA",
                "alg": "RS384",
                "n": "<<REDACTED>>",
                "e": "AQAB",
                "key_ops": [
                    "verify"
                ],
                "use": "sig",
                "ext": true,
                "kid": "6cf70879258f9c656bb7ccc65802d099"
            }
        ]
    }
    
    1. Click Import

    2. Click Client Scopes. Under Optional Client Scopes, select any scopes that start with system/ and click Add selected.

    system/*.read
    system/AllergyIntolerance.read
    system/CarePlan.read
    system/CareTeam.read
    system/Condition.read
    system/Device.read
    system/DiagnosticReport.read
    system/DocumentReference.read
    system/Encounter.read
    system/ExplanationOfBenefit.read
    system/Goal.read
    system/Immunization.read
    system/Location.read
    system/Medication.read
    system/MedicationDispense.read
    system/MedicationRequest.read
    system/Observation.read
    system/Organization.read
    system/Patient.read
    system/Practitioner.read
    system/PractitionerRole.read
    system/Procedure.read
    system/Provenance.read
    system/RelatedPerson.read
    
    1. Click Service Account. If this is blank, it should prompt you to create the Service Account user.

    2. For Service-account-infernobulk, Click Groups

    3. Search the available groups for /fhirUser and add /fhirUser to the Group Membership.

    You now have a Service Account for SMART Backend Services Authorization for BulkData usage.
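
    To verify the setup, you can request an access token using the SMART Backend Services flow (client_credentials with a signed JWT assertion). This is a hedged sketch: it assumes the realm is named test, the host matches your ingress hostname, and REPLACE_WITH_SIGNED_JWT is a JWT signed with the private key that matches the JWKS you imported.
    curl -X POST 'https://REPLACE_WITH_BASE_URL/auth/realms/test/protocol/openid-connect/token' \
        -d 'grant_type=client_credentials' \
        -d 'client_id=infernoBulk' \
        -d 'client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer' \
        -d 'client_assertion=REPLACE_WITH_SIGNED_JWT' \
        -d 'scope=system/Patient.read'
    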

    7. Side Loading Data

    To sideload data, you can use a custom datasource and fhir-server-config.json, and start up a new container from the ibmcom/ibm-fhir-server image on a machine with kubectl and the ibmcloud tools installed.

    1. Start up the container
    nerdctl run -p 9443:9443 --name fhir -e BOOTSTRAP_DB=true ibmcom/ibm-fhir-server
    docker.io/ibmcom/ibm-fhir-server:latest
    
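    To use the custom configuration, one approach is to copy the files into the running container (datasource.xml being the server snippet shown in the next step). The file name and the Open Liberty paths below are assumptions based on the image layout, so adjust them for your image version:
    nerdctl cp fhir-server-config.json fhir:/opt/ol/wlp/usr/servers/defaultServer/config/default/fhir-server-config.json
    nerdctl cp datasource.xml fhir:/opt/ol/wlp/usr/servers/defaultServer/configDropins/overrides/datasource.xml
    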
    1. Port-forward to the Kubernetes cluster’s postgres from the container, then configure a Liberty datasource that points at the forwarded port:
    kubectl port-forward --namespace=example service/ibm-fhir-server-postgres 5432:5432
    
    <server>
        <!-- ============================================================== -->
        <!-- TENANT: default; DSID: default; TYPE: read-write               -->
        <!-- ============================================================== -->
        <dataSource id="fhirDefaultDefault" jndiName="jdbc/fhir_default_default" type="javax.sql.XADataSource" statementCacheSize="200" syncQueryTimeoutWithTransactionTimeout="true" validationTimeout="30s">
            <jdbcDriver javax.sql.XADataSource="org.postgresql.xa.PGXADataSource" libraryRef="sharedLibPostgres"/>
            <properties.postgresql
                 serverName="localhost"
                 portNumber="5432"
                 databaseName="fhir"
                 user="postgres"
                 password="REPLACE_WITH_YOUR_POSTGRES_PASSWORD"
                 currentSchema="fhirdata"
             />
            <connectionManager maxPoolSize="200" minPoolSize="40"/>
        </dataSource>
    </server>
    
    1. Download the Patient bundle
    curl -L https://raw.githubusercontent.com/IBM/FHIR/main/fhir-server-test/src/test/resources/testdata/everything-operation/Antonia30_Acosta403.json -o Antonia30_Acosta403.json
    
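    Next, load the bundle into the server. A hedged sketch: this assumes the downloaded file is a batch/transaction Bundle and the server is reachable on localhost:9443 with the default credentials.
    curl -k -u 'fhiruser:change-password' -X POST \
        -H 'Content-Type: application/fhir+json' \
        --data-binary @Antonia30_Acosta403.json \
        'https://localhost:9443/fhir-server/api/v4'
    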
    1. Check the Patient
    curl -u 'fhiruser:change-password' 'https://localhost:9443/fhir-server/api/v4/Patient?_format=application/json&_page=1&_sort=-_lastUpdated'
    

    You should see the newly loaded patient returned (a total of 1), confirming the environment is ready for more comprehensive testing.

    A test using the RS384 key from SMART Health IT and the bulk data client can then exercise the environment.

    Summary

    You have learned more about the Connectathon and SMART Health IT with Backend Services Authorization.

    Further information on testing is available at https://bastide.org/2022/01/14/bulk-data-using-the-smart-on-fhir-bulk-data-client-to-test-export/

    Trackers/Issues

    A lot of interesting points were raised at the Connectathon, and the IBM Team identified a number of issues:

    1. AccessTokens should not be set with Presigned URLs #3188
    2. Support BulkData with Expires Header #3185
    3. Scope warning message for $export is confusing #3182
    4. $import allows adding Resources of multiple types in the same ndjson which could include unsupported resources. #3180
    5. fhir-smart Patient/$export assumes no _type filtering leading #3179
    6. Support subsetting exported resources based on implied SMART-on-FHIR scopes #3177
    7. Support associating a serviceAccount user with a particular group #33

    And a few which we opened with the bulk data client team:

    1. Bearer token is expected to be capitalized Bearer. #1
    2. User-Agent string is awkward #3
    3. Output doesn’t give a lot of details on what resourceType was exported #5

    And one we’re watching:

    1. Provides token even if requiresAccessToken is false #2

    And a few which we’ve had on the plan for a while:

    1. BulkData 2.0.0: _type query parameter’s cardinality is relaxed #3081
    2. Bulk Data Export 2.0.0: Support the bulkdata patient parameter #1719

    I’m looking forward to the next Connectathon and working with you all.

    Links

    These links are handy for anyone starting out:

    ArtifactHub: IBM FHIR Server Helm Chart – https://bit.ly/3qUgHiH
    GitHub – Helm Chart – https://bit.ly/3eSKQcC
    GitHub – IBM FHIR Server – https://bit.ly/3G4iEj5
    GitHub – IBM FHIR Server Documentation – https://bit.ly/3eW5tok
    DockerHub: ibmcom/ibm-fhir-server – https://hub.docker.com/r/ibmcom/ibm-fhir-server
    DockerHub: ibmcom/ibm-fhir-schematool – https://hub.docker.com/r/ibmcom/ibm-fhir-schematool
  • Tips for IBM Cloud and running IBM FHIR Server

    Here are my tips and setup for the IBM FHIR Server. I hope they help you as you set up your environment.

    1. Create a variable to prefix the environment resources and the resource-group name.

    The following generates a date 14 days in the future, in lower case; it’s best to lower-case everything used in the resource names:

    EXPIRY_DATE=$(date -j -v +14d +%Y-%b-%d |tr '[:upper:]' '[:lower:]')
    echo ${EXPIRY_DATE}
    
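    The -j -v flags above are BSD/macOS date syntax; on Linux with GNU date, an equivalent would be:
    EXPIRY_DATE=$(date -d "+14 days" +%Y-%b-%d | tr '[:upper:]' '[:lower:]')
    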

    The output is like the following:

    2022-mar-07
    
    1. Install the plugins

    When deploying the IBM FHIR Server, you’ll need a few plugins beyond the IBM Cloud defaults: cloud-object-storage, container-service, container-registry, cloud-databases, event-streams, and infrastructure-service.

    ibmcloud plugin repo-plugins -r "IBM Cloud"
    ibmcloud plugin install cloud-object-storage -f
    ibmcloud plugin install container-service -f
    ibmcloud plugin install container-registry -f
    ibmcloud plugin install cloud-databases -f
    ibmcloud plugin install event-streams -f
    ibmcloud plugin install infrastructure-service -f
    
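    You can confirm the plugins are installed with:
    ibmcloud plugin list
    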
    1. Log in with an API key (much easier if you use SSO)
    API_KEY=$(cat cloudpak.json | jq -r .apiKey)
    ibmcloud login --apikey ${API_KEY} -r us-east
    
    1. As a first step, you can check to see if there are any existing resources in the account:
    # List the Current Databases
    ibmcloud cdb ls --json
    
    # List the Open Shift Cluster
    ibmcloud oc cluster ls --json
    
    # List the Open Shift Cluster or the Event Streams
    ibmcloud resource service-instances
    
    1. Check to see if you have an existing resource group; if no group exists, create one.
    if ! ibmcloud resource group cloudpak-testing-${EXPIRY_DATE}
    then
        ibmcloud resource group-create 'cloudpak-testing'-${EXPIRY_DATE}
    fi
    
    1. Create a Cloud Object Storage Instance, if it does not exist.
    if ! ibmcloud resource service-instance cloudpak-testing-cos-${EXPIRY_DATE}
    then
        ibmcloud resource service-instance-create \
            cloudpak-testing-cos-${EXPIRY_DATE} \
            cloud-object-storage standard global \
        -g 'cloudpak-testing'-${EXPIRY_DATE}
        CRN=$(ibmcloud resource service-instance \
            cloudpak-testing-cos-${EXPIRY_DATE} \
            --output JSON | jq -r '.[].crn')
        ibmcloud cos config crn --crn "${CRN}"
        ibmcloud cos create-bucket --bucket \
            "fhir-cloudpak-testing-${EXPIRY_DATE}"
        ibmcloud resource service-key-create \
            test-user-hmac Writer --instance-id "${CRN}" \
            --parameters '{"HMAC":true}'
        ibmcloud resource service-key-create test-user-iam Writer \
            --instance-id "${CRN}" --parameters '{"HMAC":false}'
    fi
    

    Note, this creates both an IAM and an HMAC service key. The IBM FHIR Server team prefers HMAC as it enables the use of presigned URLs.
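
    If you need the HMAC credentials later, you can inspect the service key. A hedged sketch: the jq path reflects the usual Cloud Object Storage HMAC key structure and may differ on your plan.
    ibmcloud resource service-key test-user-hmac --output json \
        | jq -r '.[0].credentials.cos_hmac_keys'
    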

    1. Create an Event Streams instance, if it does not exist.
    if ! ibmcloud resource service-instance cloudpak-testing-es-${EXPIRY_DATE}
    then
        ibmcloud resource service-instance-create \
            cloudpak-testing-es-${EXPIRY_DATE} messagehub standard \
            us-east -g 'cloudpak-testing'-${EXPIRY_DATE}
        ibmcloud resource service-key-create service_manager Manager \
            --instance-name cloudpak-testing-es-${EXPIRY_DATE}
        ibmcloud es init -i cloudpak-testing-es-${EXPIRY_DATE}
        ibmcloud es topic-create --name FHIR_AUDIT --partitions 3
        ibmcloud es topic-create --name FHIR_NOTIFICATIONS --partitions 3
    fi
    
    1. Create a Db2 Instance, if it does not exist.
    if ! ibmcloud resource service-instance cloudpak-testing-db2-${EXPIRY_DATE}
    then
        ibmcloud resource service-instance-create \
            cloudpak-testing-db2-${EXPIRY_DATE} \
            dashdb-for-transactions standard us-east \
            -g 'cloudpak-testing'-${EXPIRY_DATE} -p '{
                "datacenter": "us-south:washington d.c",
                "high_availability": "no",
                "key_protect_instance": "none",
                "key_protect_key": "none",
                "oracle_compatibility": "no",
                "service-endpoints": "public-and-private"
            }'
    fi
    

    Note, there are some manual steps to complete the db2 setup.

    1. Create a postgres instance
    if ! ibmcloud resource service-instance cloudpak-testing-postgres-${EXPIRY_DATE}
    then
        ibmcloud resource service-instance-create \
            cloudpak-testing-postgres-${EXPIRY_DATE} \
            databases-for-postgresql standard us-east \
            -g 'cloudpak-testing'-${EXPIRY_DATE} \
            -p '{"service-endpoints": "public-and-private"}'
    fi
    

    Note, there are some manual steps to complete the postgres setup.

    1. Create the OpenShift Cluster. The CRN is from the prior creation of the COS instance.
    if [ $(ibmcloud oc cluster ls --provider vpc-gen2 --output json \
            | jq -r .[].name | grep -c cloudpak-testing) = 0 ]
    then
        VPC_ID=$(ibmcloud ks vpcs --provider vpc-gen2 --output json \
                    | jq -r .[].id)
        SUBNET_ID=$(ibmcloud ks subnets --provider vpc-gen2 \
            --vpc-id ${VPC_ID} --zone us-east-1 --output json \
                | jq -r '.[].id')
        ibmcloud oc cluster create vpc-gen2 \
            --name cloudpak-${EXPIRY_DATE} --flavor bx2.4x16 \
            --version 4.6_openshift \
            --cos-instance ${CRN} \
            --service-subnet 172.21.0.0/16 --pod-subnet 172.17.64.0/18 \
            --workers 3 --zone us-east-1 --vpc-id=${VPC_ID} \
            --subnet-id ${SUBNET_ID}
    fi
    
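    Cluster provisioning takes a while; you can check its state before moving on:
    ibmcloud oc cluster get --cluster cloudpak-${EXPIRY_DATE}
    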
    1. Once the postgres instance is up, you can create users – fhiradmin and fhirserver:
    PG_PASSWORD="$(openssl rand -base64 21 | base64 | sed 's|=||g')"
    echo "Postgres: " ${PG_PASSWORD}
    ibmcloud cdb deployment-user-create \
        cloudpak-testing-postgres-${EXPIRY_DATE} fhiradmin 
    ibmcloud cdb deployment-user-create \
        cloudpak-testing-postgres-${EXPIRY_DATE} fhirserver
    ibmcloud resource service-key-create service_manager \
        --instance-name cloudpak-testing-postgres-${EXPIRY_DATE}
    ibmcloud resource service-keys \
        --instance-name cloudpak-testing-postgres-${EXPIRY_DATE} \
        --output json
    
    1. Using psql, create a fhirserver user for the db:
    PGPASSWORD=****** psql "host=********.databases.appdomain.cloud port=30794 dbname=ibmclouddb user=admin sslmode=verify-full"
    

    Note, if you don’t have psql in your path, use brew install postgres to get it.

    1. Log in with the password (PGPASSWORD) from the service-key JSON.

    2. Run the following SQL to create the fhirserver user.

    CREATE USER fhirserver WITH LOGIN encrypted password '*****CHANGE*******';
    GRANT CONNECT ON DATABASE ibmclouddb TO fhirserver;
    
    1. Check the postgres configuration, and save locally:
    ibmcloud cdb deployment-connections \
        cloudpak-testing-postgres-${EXPIRY_DATE} --json
    
    1. Setup the necessary max_connections and max_prepared_transactions for postgres
    ibmcloud cdb deployment-configuration \
        cloudpak-testing-postgres-${EXPIRY_DATE} \
        '{"max_connections": 150}'
    sleep 2m
    ibmcloud cdb deployment-configuration \
        cloudpak-testing-postgres-${EXPIRY_DATE} \
        '{"max_prepared_transactions": 150}'
    
    1. Create the db2 service-key
    ibmcloud resource service-key-create service_manager \
        Manager --instance-name cloudpak-testing-db2-${EXPIRY_DATE}
    
    1. Log in to https://cloud.ibm.com and create the fhirserver user.

    Your environment is ready to run the IBM FHIR Server along with the supporting resources.

  • Recipe: Getting started with the IBM FHIR Server and Terminology

    The IBM FHIR Server Terminology module fhir-term provides a FHIR terminology service provider interface (SPI) and a default implementation that implements terminology services using CodeSystem, ValueSet, and ConceptMap resources that have been made available through the FHIR registry module fhir-registry.

    This document outlines a small test environment that sets up Cassandra and Elasticsearch to back the terminology graph and runs a simple test.

    Recipe

    1. Clone the IBM FHIR Server repository and switch to the cloned repository.
    git clone https://github.com/IBM/FHIR.git && cd $(basename $_ .git)
    
    1. Setup the examples
    mvn clean install -f fhir-examples -DskipTests
    
    1. Edit line 68 – term/fhir-term-graph/pom.xml, change provided to compile
    <dependency>
        <groupId>org.janusgraph</groupId>
        <artifactId>janusgraph-cql</artifactId>
        <scope>compile</scope>
    </dependency>
    
    1. Edit line 174 – term/fhir-term-graph/pom.xml, change provided to compile
            <dependency>
                <groupId>org.janusgraph</groupId>
                <artifactId>janusgraph-es</artifactId>
                <scope>compile</scope>
            </dependency>
    
    1. Setup the fhir projects
    mvn clean install -f fhir-parent -DskipTests
    
    1. Build the IBM FHIR Server
    export WORKSPACE=$(pwd)
    export BUILD_ID=4.11.0-SNAPSHOT
    docker logout
    nerdctl build fhir-install -t ibmcom/ibm-fhir-server:graph
    

    You’ll see:

    Successfully built 196dd54732a4
    Successfully tagged ibmcom/ibm-fhir-server:latest
    
    1. Build the Graph Terminology Loader
    pushd ${WORKSPACE}/fhir-install/src/main/docker/ibm-fhir-term-graph-loader/
    mkdir -p target/
    cp ${WORKSPACE}/term/fhir-term-graph-loader/target/fhir-term-graph-loader-*-cli.jar target/
    cp ${WORKSPACE}/LICENSE target/
    nerdctl build --build-arg FHIR_VERSION=${BUILD_ID} -t ibmcom/ibm-fhir-term-loader:latest .
    popd
    
    1. Download the janusgraph-cassandra-elasticsearch.properties
    curl -o janusgraph-cassandra-elasticsearch.properties -L \
        https://raw.githubusercontent.com/IBM/FHIR/main/term/fhir-term-graph-loader/src/test/resources/conf/janusgraph-cassandra-elasticsearch.properties
    
    1. Download the fhir-server-config.json
    curl -o fhir-server-config.json -L \
        https://gist.githubusercontent.com/prb112/c08613e6e21e77b92cfc0ea19c56f081/raw/a9e17bb2924b59de14bd04aa9fbbcc8ff38afb10/fhir-server-config-term.json
    
    1. Download the docker-compose.yml
    curl -o docker-compose.yml -L \
        https://gist.githubusercontent.com/prb112/1461e66d28767ba169843bded4b0aad8/raw/a4ed5b59fd8eb430745c173536dc5ab304186c7a/docker-compose.yml
    
    1. Startup the cassandra container
    nerdctl compose up cassandra
    
    1. Check the logs until you see:
    fhir-cassandra |INFO  [main] 2022-02-19 00:36:18,884 CassandraDaemon.java:780 - Startup complete
    
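    One way to follow the logs, assuming the service name used in the docker-compose.yml:
    nerdctl compose logs -f cassandra
    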
    1. Startup the elasticsearch container
    nerdctl compose up elasticsearch -d
    
    1. Check the logs for "Yellow to Green"
    Cluster health status changed from [YELLOW] to [GREEN]
    
    1. Start the IBM FHIR Server
    nerdctl compose up fhir-server -d
    
    1. Check the logs for smarter planet
    fhir-server_1 |[2/19/22, 0:40:09:547 UTC] 00000026 FeatureManage A   CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 27.170 seconds.
    

    You should also see the schema created:

    fhir-server |[2/19/22, 1:20:11:487 UTC] 00000027 FHIRTermGraph I   Creating schema...
    fhir-server |[2/19/22, 1:20:13:840 UTC] 00000027 FHIRTermGraph I   
    fhir-server |------------------------------------------------------------------------------------------------
    fhir-server |Vertex Label Name              | Partitioned | Static                                             |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |CodeSystem                     | false       | false                                              |
    fhir-server |Concept                        | false       | false                                              |
    fhir-server |Designation                    | false       | false                                              |
    fhir-server |Property_                      | false       | false                                              |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |Edge Label Name                | Directed    | Unidirected | Multiplicity                         |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |isa                            | true        | false       | MULTI                                |
    fhir-server |concept                        | true        | false       | MULTI                                |
    fhir-server |designation                    | true        | false       | MULTI                                |
    fhir-server |property_                      | true        | false       | MULTI                                |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |Property Key Name              | Cardinality | Data Type                                          |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |valueCode                      | SINGLE      | class java.lang.String                             |
    fhir-server |display                        | SINGLE      | class java.lang.String                             |
    fhir-server |version                        | SINGLE      | class java.lang.String                             |
    fhir-server |code                           | SINGLE      | class java.lang.String                             |
    fhir-server |codeLowerCase                  | SINGLE      | class java.lang.String                             |
    fhir-server |url                            | SINGLE      | class java.lang.String                             |
    fhir-server |value                          | SINGLE      | class java.lang.String                             |
    fhir-server |valueBoolean                   | SINGLE      | class java.lang.Boolean                            |
    fhir-server |valueDateTimeLong              | SINGLE      | class java.lang.Long                               |
    fhir-server |valueDecimal                   | SINGLE      | class java.lang.Double                             |
    fhir-server |valueInteger                   | SINGLE      | class java.lang.Integer                            |
    fhir-server |valueString                    | SINGLE      | class java.lang.String                             |
    fhir-server |count                          | SINGLE      | class java.lang.Integer                            |
    fhir-server |group                          | SINGLE      | class java.lang.String                             |
    fhir-server |language                       | SINGLE      | class java.lang.String                             |
    fhir-server |system                         | SINGLE      | class java.lang.String                             |
    fhir-server |use                            | SINGLE      | class java.lang.String                             |
    fhir-server |valueDateTime                  | SINGLE      | class java.lang.String                             |
    fhir-server |valueDecimalString             | SINGLE      | class java.lang.String                             |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |Graph Index (Vertex)           | Type        | Unique    | Backing        | Key:           Status |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |byUrl                          | Composite   | false     | internalindex  | url:          ENABLED |
    fhir-server |byCode                         | Composite   | false     | internalindex  | code:         ENABLED |
    fhir-server |byCodeLowerCase                | Composite   | false     | internalindex  | codeLowerCase:    ENABLED |
    fhir-server |byDisplay                      | Composite   | false     | internalindex  | display:      ENABLED |
    fhir-server |byUrlAndVersion                | Composite   | false     | internalindex  | url:          ENABLED |
    fhir-server |                               |             |           |                | version:      ENABLED |
    fhir-server |byValue                        | Composite   | false     | internalindex  | value:        ENABLED |
    fhir-server |byValueBoolean                 | Composite   | false     | internalindex  | valueBoolean:    ENABLED |
    fhir-server |byValueCode                    | Composite   | false     | internalindex  | valueCode:    ENABLED |
    fhir-server |byValueDateTimeLong            | Composite   | false     | internalindex  | valueDateTimeLong:    ENABLED |
    fhir-server |byValueDecimal                 | Composite   | false     | internalindex  | valueDecimal:    ENABLED |
    fhir-server |byValueInteger                 | Composite   | false     | internalindex  | valueInteger:    ENABLED |
    fhir-server |byValueString                  | Composite   | false     | internalindex  | valueString:    ENABLED |
    fhir-server |vertices                       | Mixed       | false     | search         | display:      ENABLED |
    fhir-server |                               |             |           |                | value:        ENABLED |
    fhir-server |                               |             |           |                | valueCode:    ENABLED |
    fhir-server |                               |             |           |                | valueString:    ENABLED |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |Graph Index (Edge)             | Type        | Unique    | Backing        | Key:           Status |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |Relation Index (VCI)           | Type        | Direction | Sort Key       | Order    |     Status |
    fhir-server |---------------------------------------------------------------------------------------------------
    fhir-server |
    fhir-server |[2/19/22, 1:20:15:061 UTC] 00000071 Clock         I com.datastax.oss.driver.internal.core.time.Clock getInstance Using native clock for microsecond precision
    fhir-server |[2/19/22, 1:20:15:132 UTC] 00000027 ExecutorServi I org.janusgraph.diskstorage.configuration.ExecutorServiceBuilder buildFixedExecutorService Initiated fixed thread pool of size 10
    fhir-server |[2/19/22, 1:20:15:197 UTC] 00000027 UniqueInstanc I org.janusgraph.graphdb.idmanagement.UniqueInstanceIdRetriever getOrGenerateUniqueInstanceId Generated unique-instance-id=0a04031316-fhir2
    fhir-server |[2/19/22, 1:20:15:256 UTC] 00000084 Clock         I com.datastax.oss.driver.internal.core.time.Clock getInstance Using native clock for microsecond precision
    fhir-server |[2/19/22, 1:20:15:331 UTC] 00000027 ExecutorServi I org.janusgraph.diskstorage.configuration.ExecutorServiceBuilder buildFixedExecutorService Initiated fixed thread pool of size 10
    fhir-server |[2/19/22, 1:20:15:332 UTC] 00000027 Backend       I org.janusgraph.diskstorage.Backend getIndexes Configuring index [search]
    fhir-server |[2/19/22, 1:20:15:373 UTC] 00000027 ExecutorServi I org.janusgraph.diskstorage.configuration.ExecutorServiceBuilder buildFixedExecutorService Initiated fixed thread pool of size 4
    fhir-server |[2/19/22, 1:20:15:541 UTC] 00000027 KCVSLog       I org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller initializeTimepoint Loaded unidentified ReadMarker start time 2022-02-19T01:20:15.541046Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@2c491de4
    
    1. Check the Capabilities endpoint; it should respond with a detailed capability statement.
    curl --request GET \
      --url https://localhost:9443/fhir-server/api/v4/metadata \
      --header 'Content-Type: application/fhir+json' \
      --header 'X-FHIR-TENANT-ID: default' \
      -k -o /dev/null -I
    
    1. Copy over the JAR file term/fhir-term-graph-loader/target/fhir-term-graph-loader-4.11.0-SNAPSHOT-cli.jar

    2. Load the data for your specific use case.

    • CodeSystem
    java -jar fhir-term-graph-loader-4.11.0-SNAPSHOT-cli.jar -config ./janusgraph-cassandra-elasticsearch.properties -file CodeSystem.json
    

    The loader also accepts -url as an alternative to -file.

    • Snomed
    java -jar fhir-term-graph-loader-4.11.0-SNAPSHOT-cli.jar -config ./janusgraph-cassandra-elasticsearch.properties \
        -base snomed/ \
        -concept concept.cpt \
        -relation relation.rt \
        -desc desc.file \
        -lang lang.file
    
    • UMLS
    java -jar fhir-term-graph-loader-4.11.0-SNAPSHOT-cli.jar -config ./janusgraph-cassandra-elasticsearch.properties
    

    I don’t actually execute these commands in this blog, as the terminology content is licensed.

    1. Execute the queries for the terminology system.
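
    For example, a CodeSystem $lookup over the REST API. A hedged sketch: it assumes the default credentials and a code system that is available through the FHIR registry.
    curl -k -u 'fhiruser:change-password' \
        --header 'X-FHIR-TENANT-ID: default' \
        'https://localhost:9443/fhir-server/api/v4/CodeSystem/$lookup?system=http://hl7.org/fhir/administrative-gender&code=female'
    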

    There are some ancillary tasks you should do when done:

    A. Shut down the containers when you are done.

    nerdctl compose down 
    

    B. List the current container images.

    nerdctl images
    
  • Recipe: Setting up IBM FHIR Server and Azure in Development

    The IBM FHIR Server supports exporting and importing Bulk Data using the extended operations $import, $export, and $bulkdata-status, which are implemented as Java Maven projects. The IBM FHIR Server uses JSR-352 Java Batch jobs running in the Open Liberty Java Batch framework to enable access to large volumes of HL7 FHIR data.

    This blog is a follow on to Recipe: IBM FHIR Server – Using Bulk Data with the Azure Blob Service, and provides a docker-compose file that works with the Azure emulator called Azurite.

    Typically, you can run the container locally:

    docker run -p 10000:10000 -v /local/path/to/azurite:/data mcr.microsoft.com/azure-storage/azurite \
        azurite-blob --blobHost 0.0.0.0 --blobPort 10000

    Recipe

    1. Pull the image

    docker pull mcr.microsoft.com/azure-storage/azurite
    
    Using default tag: latest
    latest: Pulling from azure-storage/azurite
    396c31837116: Pull complete 
    9e7b0c9574dd: Pull complete 
    ec07c04a8d4c: Pull complete 
    c1eb01e62785: Pull complete 
    2cbc599970e9: Pull complete 
    a0ee56369073: Pull complete 
    ad1956587082: Pull complete 
    29652032eab7: Pull complete 
    Digest: sha256:4d40e97bf9345c9e321f4b8cf722dc4615d5d6080fd2953844be288a13eadb59
    Status: Downloaded newer image for mcr.microsoft.com/azure-storage/azurite:latest
    mcr.microsoft.com/azure-storage/azurite:latest

    2. Download the docker-compose.yml and put it in a working folder

    3. Download the fhir-server-config.json and put it in the same working folder

    4. Create a folder azurite in the working folder.

    5. The file layout should be a working folder containing docker-compose.yml, fhir-server-config.json, and the azurite/ directory.

    6. Start up the Docker Compose stack

    nerdctl --address /var/run/docker/containerd/containerd.sock compose up

    You can then upload data and use the Azurite emulator. The key is the storageProviders connection configuration:

    "storageProviders": {
         "default" : {
         "type": "azure-blob",
         "bucketName": "fhirbulkdata",
         "auth" : {
              "type": "connection",
              "connection": "DefaultEndpointsProtocol=http;AccountName=account1;AccountKey=key1;BlobEndpoint=http://azure-blob:10000/account1;"
         },
         "disableOperationOutcomes": true,
         "duplicationCheck": false, 
         "validateResources": false, 
         "create": false
         }
    }
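
    With the storage provider in place, a system $export exercises the bulk data path against Azurite. A hedged sketch: it assumes the default fhiruser credentials and that the server from the docker-compose.yml is listening on localhost:9443. The Content-Location response header contains the polling URL for $bulkdata-status.
    curl -k -u 'fhiruser:change-password' -i \
        -H 'Prefer: respond-async' \
        'https://localhost:9443/fhir-server/api/v4/$export?_type=Patient'
    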