Blog

  • Docker Compose and Not Being Able to Set a TTY in GitHub Actions

    In my GitHub repository, IBM FHIR Server, we use GitHub Actions to execute our Continuous Integration (CI) workflows. The CI workflows enable us to exercise our code in complicated scenarios – a database (IBM Db2, Postgres) and an events system (Kafka with two ZooKeepers and two brokers, or NATS) in a mix of configurations.

    I started to enable Docker Compose with Kafka and the IBM FHIR Server’s Audit module.

    I ran into this error when running docker-compose exec in my workflow’s run step:

    TEST_CONFIGURATION: check that there is output and the configuration works
    the input device is not a TTY
    Error: Process completed with exit code 1.
    

    It turns out this is a well-known issue in GitHub Actions (https://github.com/actions/runner/issues/241#issuecomment-745902718), with a great example of a fix at https://github.com/gfx/example-github-actions-with-tty/blob/4c5f457c65dfe61e273e0470414699420be5e134/.github/workflows/test.yml#L18

    I ended up changing my workflow, and it passed!

    The key is setting the shell value – shell: 'script -q -e -c "bash {0}"'

    - name: Server Integration Tests - Audit
      env:
        WORKSPACE: ${{ github.workspace }}
      shell: 'script -q -e -c "bash {0}"'
      run: |
        bash build/audit/bin/pre-integration-test.sh ${{matrix.audit}}
        bash build/audit/bin/integration-test.sh ${{matrix.audit}}
        bash build/audit/bin/post-integration-test.sh ${{matrix.audit}}
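
    If you do not need a TTY at all, another option is to disable pseudo-TTY allocation on the exec itself with -T; a minimal sketch (the service and command here are placeholders):

    docker-compose exec -T kafka echo "no TTY allocated"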
    
  • IBM FHIR Server – Using the Docker Image with the near Search Feature and FHIR Examples from Jupyter Notebooks

    Hi Everyone.

    Thanks for sitting down and watching this video. I’m going to show you how to quickly spin up a Docker image of IBM FHIR Server, check the logs, make sure it’s healthy, and how to use the fhir-examples module with the near search.

    The following are the directions followed in the video:

    Navigate to DockerHub: IBM FHIR Server

    Run the server: docker run -p 9443:9443 ibmcom/ibm-fhir-server

    Note, startup may take about two minutes as the image bootstraps a new Apache Derby database. To use Postgres or IBM Db2, please review the documentation.

    Review the docker logs

    Check that the server is up and operational: curl -k -i -u 'fhiruser:change-password' 'https://localhost:9443/fhir-server/api/v4/$healthcheck'

    You now have a running IBM FHIR Server.

    Let’s load some data using a Jupyter Notebook.

    The IBM FHIR Server team wraps the specification and service unit test examples into a module called fhir-examples and posts it to Bintray: ibm-fhir-server-releases, or you can go directly to the repository.

    We’re going to use Python and a Jupyter Notebook to process the fhir-examples.

    We’ll download the zip, filter the interesting JSON files, and upload them to the IBM FHIR Server in a loop.

    import requests

    # z is the zipfile.ZipFile of fhir-examples opened earlier;
    # headers and httpAuth are defined in earlier cells.
    entries = z.namelist()
    for entry in entries:
        if entry.startswith('json/ibm/bulk-data/location/'):
            content = z.open(entry).read()
            r = requests.post('https://localhost:9443/fhir-server/api/v4/Location',
                              data=content,
                              headers=headers,
                              auth=httpAuth,
                              verify=False)
            print('Done uploading - ' + entry)
    

    We’re going to query the data on the IBM FHIR Server using the search query parameter near to search within 10 km of Cambridge, Massachusetts.

    queryParams = {
        'near': '42.373611|-71.110558|10|km',
        '_count': 200
    }
    

    Note, the IBM FHIR Server includes some additional search support beyond the UCUM and WGS84 units, and it’s listed at the Conformance page.

    We’ll normalize this data and put it in a Pandas dataframe.
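
    A minimal sketch of that normalization, assuming the search response from the requests call is in r (pandas.json_normalize flattens the Bundle entries into dotted columns like resource.position.latitude):

    import pandas as pd
    import requests

    # Assumes headers, httpAuth, and queryParams from the earlier cells.
    r = requests.get('https://localhost:9443/fhir-server/api/v4/Location',
                     params=queryParams, headers=headers, auth=httpAuth, verify=False)
    bundle = r.json()

    # Flatten the Bundle entries; nested fields become dotted column names.
    df = pd.json_normalize(bundle['entry'])
    location_rows = df.to_dict('records')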

    From the dataframe, we can now add markers to the page.

    import folium

    cambridge = [42.373611, -71.11000]
    map_cambridge_locs_from_server = folium.Map(location=cambridge, zoom_start=10)

    # Iterate through the rows
    for location_row in location_rows:
        # Cast the values to the appropriate types, as folium will die weirdly without it.
        lat_inc = float(location_row['resource.position.latitude'])
        long_inc = float(location_row['resource.position.longitude'])
        name_inc = str(location_row['resource.name'])
        label = folium.Popup(name_inc, parse_html=True)
        folium.CircleMarker(
            [lat_inc, long_inc],
            radius=5,
            popup=label,
            fill=True,
            fill_color='red',
            fill_opacity=0.7).add_to(map_cambridge_locs_from_server)

    map_cambridge_locs_from_server
    

    You can see the possibilities with the IBM FHIR Server and the near search.


  • jq fu for FHIR Capability Statements

    The following takes a set of Capability Statements and generates the minimum coverage set of Resources and Profiles across the implementation guides.

    Resources and Profiles output in Markdown

    This is handy to check the number of profiles in use (beyond the base spec) and to know the profiles used on the server based on the capability statements.

    Call

    cat capabilitystatements/*.json | jq -r '.rest[].resource[]| "|\(.type)|\(.supportedProfile)|"' | sort -u

    Output

    | Resource | Profiles |
    |----------|----------|
    | AllergyIntolerance | http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance |
    | CarePlan | http://hl7.org/fhir/us/core/StructureDefinition/us-core-careplan |
    | CareTeam | http://hl7.org/fhir/us/core/StructureDefinition/us-core-careteam |
    | Condition | http://hl7.org/fhir/us/core/StructureDefinition/us-core-condition |
    | Device | http://hl7.org/fhir/us/core/StructureDefinition/us-core-implantable-device |
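
    To get just the count of distinct profiles in use, a variant of the same call (supportedProfile is an array, so it is iterated with []?):

    cat capabilitystatements/*.json | jq -r '.rest[].resource[].supportedProfile[]?' | sort -u | wc -l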

    Resource and Operations

    To check the operations required in the implementation guides, you can use the following to process a set of capability statements into useful output.

    Call

    cat capabilitystatements/*.json | jq -r '.rest[].resource[]| "|\(.type)|\(.operation)|"' | grep -v null
    

    Output

    | Resource | Operation | Conformance |
    |----------|-----------|-------------|
    | ValueSet | $expand | SHOULD |

    I hope this helps.

  • Getting Explain to work with IBM Db2 on Cloud

    My team has been running more workloads on IBM Cloud, more specifically with IBM Db2. Our daily tools work slightly differently in the cloud – less administrative access and fewer tools we can run on the host, such as db2batch, db2advis, db2expln and other native tools.

    That’s when I ran across some great references that led me in a direction that works for my team.

    • Create a User (with Password)
    • Catalog the Remote Database
    • Run db2expln

    1. Login to the IBM Cloud console
    2. Click Open Console
    3. Expand Settings
    4. Click Manage Users
    5. Click Add
    6. Click Add User
    7. Enter the relevant details for the user
    8. Click Create

    For the client side, I use my Db2 docker container.

    Setup the SSL

    mkdir -p /database/config/db2inst1/SSL_CLIENT
    chmod -R 755 /database/config/db2inst1/SSL_CLIENT
    /database/config/db2inst1/sqllib/gskit/bin/gsk8capicmd_64 -keydb \
       -create -db "/database/config/db2inst1/SSL_CLIENT/ibmca.kdb" \
       -pw "passw0rd" -stash
    /database/config/db2inst1/sqllib/gskit/bin/gsk8capicmd_64 -cert \
       -add -db "/database/config/db2inst1/SSL_CLIENT/ibmca.kdb" \
       -pw "passw0rd" -file sqllib/cfg/DigiCertGlobalRootCA.arm
    chmod 775 /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    chmod 775 /database/config/db2inst1/SSL_CLIENT/ibmca.sth
    

    Configure the database

    db2 update dbm cfg using SSL_CLNT_KEYDB \
       /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    db2 update dbm cfg using SSL_CLNT_STASH \
       /database/config/db2inst1/SSL_CLIENT/ibmca.sth
    db2 update dbm cfg using keystore_location \
       /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    

    Restart the database

    db2stop
    db2start
    

    Catalog the database

    db2 catalog tcpip node cdtdb1 remote \
       dashdb-txn-flex-yp-xxxx-xxxx.services.dal.bluemix.net server 50001 security ssl
    db2 catalog db bludb as fhirblu4 at node cdtdb1
    db2 connect to fhirblu4 user testpaul using ^PASSWORD^
    

    If you have a problem connecting, log out of db2inst1 and log back in. It’ll activate the db2profile again.
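
    Alternatively, source the profile in the current shell (the path assumes this container’s instance home):

    . /database/config/db2inst1/sqllib/db2profile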

    Run db2expln

    db2expln -d fhirblu4 -u testpaul "^PASSWORD^" -graph -f 1.sql \
       -terminator ';' -o 1.out
    
    Optimizer Plan:

    RETURN (1)  rows=10  cost=412211
    +- TBSCAN (2)  rows=10  cost=412211
       +- SORT (3)  rows=10  cost=412211
          +- HSJOIN (4)  rows=77909.3  cost=412164
             +- TBSCAN (5)  rows=88591.5  cost=410438
             |  +- Table: FHIRDATA2.OBSERVATION_RESOURCES  rows=354367
             |  +- IXSCAN (6)  rows=0.00309393  cost=7.52927
             |  |  +- Index: FHIRDATA2.IDX_OBSERVATION_TOKEN_VALUES_RPS  rows=2.18146e+06
             |  +- IXSCAN (7)  rows=1  cost=7.57425
             |     +- Index: FHIRDATA2.IDX_OBSERVATION_STR_VALUES_RPS  rows=1.24649e+06
             +- TBSCAN (8)  rows=311638  cost=1594.97
                +- Table: FHIRDATA2.OBSERVATION_LOGICAL_RESOURCES  rows=311638
    
    Relevant References
    https://www.ibm.com/cloud/blog/how-to-use-an-api-key-or-access-token-to-connect-to-ibm-db2-on-cloud
    https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.admin.sec.doc/doc/t0053518.html
    https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.sec.doc/doc/c0070395.html
    https://developer.ibm.com/recipes/tutorials/ssl-how-to-configure-it-on-db2/
    https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.admin.sec.doc/doc/t0012036.html
    https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.security.doc/doc/iam.html
    https://developer.ibm.com/recipes/tutorials/ssl-how-to-configure-it-on-db2/#r_step8

    Addendum

    These are the containers settings for SSL:

    [db2inst1@4dda34a66a99 ~]$ db2 get dbm cfg | grep -i ssl
    SSL server keydb file (SSL_SVR_KEYDB) = /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    SSL server stash file (SSL_SVR_STASH) = /database/config/db2inst1/SSL_CLIENT/ibmca.sth
    SSL server certificate label (SSL_SVR_LABEL) =
    SSL service name (SSL_SVCENAME) =
    SSL cipher specs (SSL_CIPHERSPECS) =
    SSL versions (SSL_VERSIONS) =
    SSL client keydb file (SSL_CLNT_KEYDB) = /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    SSL client stash file (SSL_CLNT_STASH) = /database/config/db2inst1/SSL_CLIENT/ibmca.sth
    Keystore location (KEYSTORE_LOCATION) = /database/config/db2inst1/SSL_CLIENT/ibmca.kdb
    

    Db2 Top with remote db

    db2top -d fhirpdm -n pdmperf -u bluadmin -p password-removed

    Run with setup.sql

    db2expln -d fhirdb -setup setup.sql -g -z \; -f uniq.sql -o plan.txt
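
    The setup.sql holds whatever must run before the explain; a hypothetical example consistent with the schema above:

    -- hypothetical setup.sql: establish the schema before explaining uniq.sql
    SET CURRENT SCHEMA FHIRDATA2;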

  • Release 4.2.2 – Notes

    My team just released IBM FHIR Server 4.2.2. Beyond the amazing things documented on the release tab, I learned a few things.

    Replace Tags

    If you need to replace tags locally, force the update with fetch:

    ~$ git fetch --tags -f
    From github.com:IBM/FHIR
    t [tag update] 4.1.0 -> 4.1.0
    t [tag update] 4.2.2 -> 4.2.2
    

    Rebuild the Validation Package

    export BUILD_TYPE=RELEASE
    export BUILD_VERSION=4.2.2
    bash build/release/version.sh
    
    mvn ${THREAD_COUNT} -ntp -B clean source:jar source:test-jar javadoc:jar install \
        -f fhir-parent \
        -Pfhir-validation-distribution,fhir-ig-carin-bb,fhir-ig-davinci-pdex-plan-net,fhir-ig-mcode,fhir-ig-us-core,deploy-bintray \
        -DskipTests \
        -pl ../fhir-ig-davinci-pdex-plan-net/,../fhir-validation \
        -amd
    

    -amd (“also make dependents”) keeps the build focused on the listed projects and the modules that depend on them, rather than the full fhir-parent build.

    Idempotent Execution of the Role Creation

    su - db2inst1 -c "db2 \"connect to fhirdb\" && db2 \" BEGIN IF (SELECT ROLENAME FROM SYSCAT.ROLES WHERE ROLENAME = 'FHIRSERVER') IS NULL THEN EXECUTE IMMEDIATE 'CREATE ROLE FHIRSERVER'; END IF; END;\""

    su - db2inst1 -c "db2 \"connect to fhirdb\" && db2 \" BEGIN IF (SELECT ROLENAME FROM SYSCAT.ROLES WHERE ROLENAME = 'FHIRBATCH') IS NULL THEN EXECUTE IMMEDIATE 'CREATE ROLE FHIRBATCH'; END IF; END;\""

    Shell Pipestatus

    PIPESTATUS checks the status of any command in a pipe; it was helpful in some automation where I had to wait on a jar to finish and check the output.

    Command

    curl -L https://google.com | grep response | tee response.txt
    RC=${PIPESTATUS[1]}
    echo $RC
    

    Output

    4
    


  • Apache Nifi and IBM FHIR Server: InvokeHTTP and SSL

    A user who is integrating Apache Nifi and the IBM FHIR Server asked how to get SSL working between the two; here is a small recipe:

    1. List Keys
    keytool -list -keystore \
      fhir-server-dist/wlp/usr/servers/fhir-server/resources/security/fhirKeyStore.p12 \
      -storepass change-password -rfc
    

    Check to see if you have a default; if you do, go to step 2, else skip to step 3.

    2. Change default
    keytool -changealias -keystore \
      fhir-server-dist/wlp/usr/servers/fhir-server/resources/security/fhirKeyStore.p12 \
       -storepass change-password -alias default -destalias old_default
    

    You can always double-check by listing the keys again (step 4).

    3. Create a new default with a distinguished name for your hostname (mine is host.docker.internal)
    keytool -genkey -keyalg RSA -alias default -keystore \
    fhir-server-dist/wlp/usr/servers/fhir-server/resources/security/fhirKeyStore.p12 \
      -storepass change-password -validity 2000 -keysize 2048 -dname cn=host.docker.internal
    
    4. Confirm the list of keys
    keytool -list -keystore \
      fhir-server-dist/wlp/usr/servers/fhir-server/resources/security/fhirKeyStore.p12 \
      -storepass change-password
    
    Keystore type: PKCS12
    Keystore provider: SUN
    
    Your keystore contains 2 entries
    
    old_default, May 15, 2020, PrivateKeyEntry,
    Certificate fingerprint (SHA-256): 9D:94:C2:F8:C1:51:9B:0F:21:50:4F:BB:60:A4:8A:3F:AF:C0:F0:13:C4:80:BE:A3:94:42:04:46:56:DB:D9:7B
    default, May 15, 2020, PrivateKeyEntry,
    Certificate fingerprint (SHA-256): 5B:38:D5:FD:7F:8A:80:60:12:CF:7F:61:C6:D6:C5:54:F3:FD:F8:80:34:58:A5:3F:1C:8F:2C:0A:42:85:C0:49
    

    Notice the new default key.

    5. Restart your app server to pick up the latest. Once restarted, proceed to the next step.

    6. Confirm the subject is the one you need.

    curl -k https://localhost:9443 -v 2>&1 | grep -i subject
    *  subject: CN=host.docker.internal
    
    7. Start a Nifi image
    docker run -p 8080:8080 --rm apache/nifi:latest bash
    
    8. Find the docker container id
    $ docker ps
    CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                                   NAMES
    09d2a7395fa2        apache/nifi:latest   "../scripts/start.sh…"   7 seconds ago       Up 6 seconds        8000/tcp, 8443/tcp, 10000/tcp, 0.0.0.0:8080->8080/tcp   gracious_rosalind
    
    9. Copy the fhirKeyStore.p12 (in this case, we updated just this one).
    docker cp fhir-server-dist/wlp/usr/servers/fhir-server/resources/security/fhirKeyStore.p12 \
      09d2a7395fa2:/fhirKeyStore.p12
    
    10. Login to Nifi – http://localhost:8080/nifi/?processGroupId=root&componentIds=1aef81c1-0172-1000-16cd-37702389d8d3

    11. Add an InvokeHTTP

      1. Click Configure
      2. Click on properties
      3. Enter Remote URL – https://host.docker.internal:9443/fhir-server/api/v4/metadata
      4. Enter Basic Authentication Username – fhiruser
      5. Enter Basic Authentication Password – change-password
      6. Click SSL Context Service
        1. Click the Drop Down
        2. Click Create Service – StandardRestrictedSSLContextService
        3. Click Create
        4. Click the Arrow to configure
        5. When prompted "Save changes before going to this Controller Service?", click Yes.
        6. Click Configure
        7. Click Properties
          1. Click Truststore Filename, and enter /fhirKeyStore.p12
          2. Click Truststore Password, and enter change-password
          3. Click Truststore Type, and enter PKCS12
        8. Click Apply
        9. Check the State – it shows Validating; you may have to refresh until it says Disabled.
        10. On the left, click the enable icon, turn it on, and click Enable. It may take a minute.
        11. It’s basically set up; now let’s get some output.
    12. Add a LogMessage

      1. Select all Types
    13. Link the Two Nodes

    14. Click Play

    You’ll see your Nifi flow working.

    You can always use the docker image for the IBM FHIR Server https://hub.docker.com/r/ibmcom/ibm-fhir-server

  • jq fu

    Extracting a Resource from an Array

    When extracting a resource from a FHIR Bundle with over 10,000 entries, and you know there is a problem at a specific entry, you can use jq and array processing to extract the resource:

    cat single_patient_bundle-03-09-2020/9b3f6160-285d-4319-8d15-ac07ee3d3a8e.json \
        | jq '.entry[12672].resource'
    {
      "id": "99274e87-db14-43fa-9ada-2fcb6c1d68a6",
      "meta": {
        "profile": [
          "http://hl7.org/fhir/StructureDefinition/vitalspanel",
          "http://hl7.org/fhir/StructureDefinition/vitalsigns"
        ]
      },
      "status": "final",
      "resourceType": "Observation"
    }

    Extracting two correlated values

    To extract two correlated values, you can use multiple selectors, such as the following:

    cat single_patient_bundle-03-09-2020/9b3f6160-285d-4319-8d15-ac07ee3d3a8e.json \
        | jq -r '.entry[].resource | "\(.status),\(.resourceType)"' | sort -u
    final,Observation
    

    Checking the Supported Profiles on the IBM FHIR Server

    This is a handy curl to check what profiles are loaded on your IBM FHIR Server.

    Request

    curl -ks -u fhiruser:change-password https://localhost:9443/fhir-server/api/v4/metadata 2>&1 | jq -r '.rest[].resource[] | "\(.type),\(.supportedProfile)"'
    

    Processed Response

    PractitionerRole,["http://hl7.org/fhir/us/carin/StructureDefinition/carin-bb-practitionerrole|0.1.0","http://hl7.org/fhir/us/core/StructureDefinition/us-core-practitionerrole|3.1.0","http://hl7.org/fhir/us/davinci-pdex-plan-net/StructureDefinition/plannet-PractitionerRole|0.1.0"]
    Procedure,["http://hl7.org/fhir/us/core/StructureDefinition/us-core-procedure|3.1.0"]
    Provenance,["http://hl7.org/fhir/StructureDefinition/ehrsrle-provenance|4.0.1","http://hl7.org/fhir/StructureDefinition/provenance-relevant-history|4.0.1","http://hl7.org/fhir/us/core/StructureDefinition/us-core-provenance|3.1.0"]
    Questionnaire,["http://hl7.org/fhir/StructureDefinition/cqf-questionnaire|4.0.1"]
    QuestionnaireResponse,null
    RelatedPerson,["http://hl7.org/fhir/us/carin/StructureDefinition/carin-bb-relatedperson|0.1.0"]
    RequestGroup,["http://hl7.org/fhir/StructureDefinition/cdshooksrequestgroup|4.0.1"]
    ResearchDefinition,null
    ResearchElementDefinition,null
    ResearchStudy,null
    ResearchSubject,null
    

    Extracting Search Parameters with a Type Composite

    cat ./fhir-registry/definitions/search-parameters.json | jq -r '.entry[].resource | select(.type == "composite") | .expression' | sort -u

    ActivityDefinition.useContext
    CapabilityStatement.useContext | CodeSystem.useContext | CompartmentDefinition.useContext | ConceptMap.useContext | GraphDefinition.useContext | ImplementationGuide.useContext | MessageDefinition.useContext | NamingSystem.useContext | OperationDefinition.useContext | SearchParameter.useContext | StructureDefinition.useContext | StructureMap.useContext | TerminologyCapabilities.useContext | ValueSet.useContext
    ChargeItemDefinition.useContext
    DocumentReference.relatesTo
    EffectEvidenceSynthesis.useContext
    EventDefinition.useContext
    Evidence.useContext
    EvidenceVariable.useContext
    ExampleScenario.useContext
    Group.characteristic
    Library.useContext
    Measure.useContext
    MolecularSequence.referenceSeq
    MolecularSequence.variant
    Observation
    Observation | Observation.component
    Observation.component
    PlanDefinition.useContext
    Questionnaire.useContext
    ResearchDefinition.useContext
    ResearchElementDefinition.useContext
    RiskEvidenceSynthesis.useContext
    TestScript.useContext
    

    Extracting Composite Codes from Search Parameters

    cat ./fhir-registry/definitions/search-parameters.json | jq -r '.entry[].resource | select(.type == "composite") | .code'

    context-type-quantity
    context-type-value
    context-type-quantity
    context-type-value
    context-type-quantity
    context-type-value
    relationship
    ...
    chromosome-variant-coordinate
    chromosome-window-coordinate
    referenceseqid-variant-coordinate
    referenceseqid-window-coordinate
    code-value-concept
    code-value-date
    code-value-quantity
    code-value-string
    combo-code-value-concept
    combo-code-value-quantity
    component-code-value-concept
    component-code-value-quantity
    ...
    context-type-quantity
    context-type-value
    

    Handy Command to get Duplicate Search Parameters
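
    A sketch for this one, assuming the same search-parameters.json as above – duplicate codes show up with uniq -d:

    cat ./fhir-registry/definitions/search-parameters.json | jq -r '.entry[].resource.code' | sort | uniq -d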

  • Fun with Patent Data: Thomas Edison Jupyter Notebook

    Thomas Alva Edison was a famous American inventor and businessman, “described as America’s greatest inventor”, and was one of the most prolific inventors in US history. Thomas Edison filed and was granted 1084 patents from 1847 to 1931.[1] He’s just one cool inventor – lamps, light bulbs, the phonograph and so many more life-changing inventions.

    Google Patents has a wonderful depth of patent history, and the history is searchable with custom search strings:

    • inventor:(Thomas Edison) before:priority:19310101
    • inventor:(Paul R Bastide) after:priority:2009-01-01

    Google provides a seriously cool feature – a downloadable csv. Pandas anyone? The content is provided under an agreement between the USPTO and Google. Google also provides it as part of the Google APIs/Platform. The data is fundamentally public, and Google has made it very accessible with some GitHub examples. [2] The older patent data is more difficult to search, as the content has been scraped via Optical Character Recognition.

    I have found a cross-section of three things I am very interested in: History, Inventing and Data Science. Time to see what cool things are in the Edison data.

    Steps

    To start playing with the data, one must install Jupyter.

    python3 -m pip install --upgrade pip
    python3 -m pip install jupyter

    Launch Jupyter and navigate to http://localhost:8888/tree

    jupyter notebook

    Load and Launch the notebook

    1. Download the Edison.ipynb
    2. Unzip the Edison.ipynb.zip
    3. Upload the Edison.ipynb to Jupyter
    4. Launch the Edison notebook and follow along with the cells.

    The notebook renders some interesting insights using numpy, pandas, matplotlib and scipy. The notebook includes a cell to install the Python libraries, and once one executes the prerequisites cell, all is loaded.

    The Jupyter notebook loads the data using an input cell; once run, the analytics enable me to see the number of co-inventors (after cleansing the data first).
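
    A minimal sketch of that cleanse-and-count, assuming a Google Patents CSV export (the filename and the inventor/author column are from my export – check yours):

    import pandas as pd

    # In my export, the first row holds the search URL, so skip it.
    df = pd.read_csv('gp-search.csv', skiprows=1)

    # Co-inventors = everyone listed on the patent beyond the first inventor.
    inventors = df['inventor/author'].dropna().str.split(',')
    co_inventors = inventors.str.len() - 1
    print(co_inventors.value_counts().sort_index())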

    One notices that Thomas Alva is not an inventor in those results; as such, one needs to modify the notebook to use the API with more recent inventors. With the comprehensive APIs from the USPTO, one extracts patent data using one of a number of JSON REST APIs. Kudos to the USPTO for really opening up the data and the API.

    Conclusion

    All-in, the APIs/Python/Jupyter Notebook analysis is for fun, and provides insight into Thomas Edison’s patent data – one focused individual.

    References

    [1] Prolific Inventors https://en.wikipedia.org/wiki/List_of_prolific_inventors – the number appears to conflict with https://en.wikipedia.org/wiki/List_of_Edison_patents, which reports 1093 (inclusive of design patents)
    [2] Google / USPTO Patent Data https://www.google.com/googlebooks/uspto-patents.html
    [3] USPTO Open Data https://developer.uspto.gov/about-open-data and https://developer.uspto.gov/api-catalog
    [4] PatentsView http://www.patentsview.org/api/faqs.html

  • AppDev: Zookeeper Port Forwarding to all servers from local machine

    To simplify testing with Zookeeper on a remote Kafka cluster, one must connect to the client application ports on the backend. When the remote Kafka cluster has multiple nodes and sits behind a firewall and an SSH jump server, the complexity is fairly high. Note, the SSH jump server is the permitted man in the middle. The client must allow application access to Zookeeper on Kafka – listening locally. Current techniques allow for a single port hosted on the developer’s machine – for instance, 2181 listening on the local machine – forwarded to a single remote server. This approach is not reliable – servers are taken out of service, added back, fail, or reroute to the master (another separate server).

    Port       Description
    88/tcp     Kerberos
    2181/tcp   zookeeper.property.clientPort

    A typical connection looks like: 

    ssh -J jump-server kafka-1 -L 2181:kafka-1:2181 'while true; do echo waiting; sleep 180; done'

    I worked to develop a small proxy. First, set up the hosts file.

    1 – Edit /etc/hosts
    2 – Add the entries to the hosts file

    127.0.0.1 kafka-1
    127.0.0.2 kafka-2
    127.0.0.3 kafka-3
    127.0.0.4 kafka-4
    127.0.0.5 kafka-5

    3 – Save the hosts file
    4 – Setup the available interfaces (one for each unique service); 127.0.0.1 is already up and in use, so you only need to add the extras

    sudo ifconfig lo0 alias 127.0.0.2 up
    sudo ifconfig lo0 alias 127.0.0.3 up
    sudo ifconfig lo0 alias 127.0.0.4 up
    sudo ifconfig lo0 alias 127.0.0.5 up

    5 – Setup the port forwarding; forward to the jump server: ssh -L 30991:localhost:30991 jump-server
    6 – Forward to the Kafka server: ssh -L 30991:localhost:2181 kafka-1
    7 – Loop while on the Kafka server: while true; do echo "waiting"; sleep 180; done
    8 – Repeat for each Kafka server, increasing the port by 1 (refer to the ports section for the mapping)
    9 – Setup the terminal – node krb5-tcp.js
    10 – Setup the terminal – node proxy_socket.js (a sketch of this proxy follows the configuration below)

    echo stats | nc kafka-1 2181
    Zookeeper version: 3.4.6-IBM_4–1, built on 06/17/2016 01:58 GMT
    Clients:
    /192.168.12.47:50404[1](queued=0,recved=1340009,sent=1360508)
    /192.168.12.46:48694[1](queued=0,recved=1348346,sent=1368936)
    /192.168.12.48:39842[1](queued=0,recved=1341655,sent=1362178)
    /0:0:0:0:0:0:0:1:39644[0](queued=0,recved=1,sent=0)

    Latency min/avg/max: 0/0/2205
    Received: 4878752
    Sent: 4944171
    Connections: 4
    Outstanding: 0
    Zxid: 0x1830001944e
    Mode: follower
    Node count: 442

    11 – Use your code to access the Zookeeper server

    References: https://github.com/nodejitsu/node-http-proxy

    sudo ifconfig lo0 alias 127.0.0.6 up
    sudo ifconfig lo0 alias 127.0.0.7 up
    sudo ifconfig lo0 alias 127.0.0.8 up

    Configuration

    {
      "2181": {
        "type": "socket",
        "members": [
          { "hostname": "kafka-1", "port": 30991 },
          { "hostname": "kafka-2", "port": 30992 },
          { "hostname": "kafka-3", "port": 30993 },
          { "hostname": "kafka-4", "port": 30994 },
          { "hostname": "kafka-5", "port": 30995 }
        ]
      }
    }
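
    proxy_socket.js itself is not shown in this post; a minimal sketch of the idea, assuming the configuration above is saved as config.json (plain net sockets are enough for this TCP case, though node-http-proxy, referenced above, works too):

    // proxy_socket.js - round-robin the local service port across the
    // SSH-forwarded member ports, so a dead member rotates to the next.
    const net = require('net');
    const config = require('./config.json');

    for (const [port, service] of Object.entries(config)) {
      let next = 0;
      net.createServer((client) => {
        const member = service.members[next++ % service.members.length];
        const upstream = net.connect(member.port, member.hostname);
        client.pipe(upstream).pipe(client);
        client.on('error', () => upstream.destroy());
        upstream.on('error', () => client.destroy());
      }).listen(Number(port), '127.0.0.1');
    }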
    Jaas Configuration – ./kerberos/src/main/java/demo/kerberos/jaas.conf

    TestClient {
        com.sun.security.auth.module.Krb5LoginModule required
        principal="ctest4@test.COM"
        debug=true
        useKeyTab=true
        storeKey=true
        doNotPrompt=false
        keyTab="/Users/paulbastide/tmp/kerberos/test.headless.keytab"
        useTicketCache=false;
    };

    Java – App.java

    package demo.kerberos;

    import javax.security.auth.*;
    import javax.security.auth.login.*;
    import javax.security.auth.callback.*;
    import javax.security.auth.kerberos.*;
    import java.io.*;

    public class App {
        public static void main(String[] args) {
            System.setProperty("java.security.auth.login.config",
                    "/Users/paulbastide/tmp/kerberos/src/main/java/demo/kerberos/jaas.conf");
            System.setProperty("java.security.krb5.conf",
                    "/Users/paulbastide/tmp/kerberos/krb5.conf");

            Subject mysubject = new Subject();
            LoginContext lc;

            try {
                lc = new LoginContext("TestClient", mysubject, new MyCallBackHandler());
                lc.login();
            } catch (LoginException e) {
                e.printStackTrace();
            }
        }
    }

    Java – MyCallBackHandler.java

    package demo.kerberos;

    import javax.security.auth.*;
    import javax.security.auth.login.*;
    import javax.security.auth.callback.*;
    import javax.security.auth.kerberos.*;
    import java.io.*;

    public class MyCallBackHandler implements CallbackHandler {
        public void handle(Callback[] callbacks)
                throws IOException, UnsupportedCallbackException {
            for (int i = 0; i < callbacks.length; i++) {
                System.out.println(callbacks[i]);
            }
        }
    }
  • AppDev: Forwarding DGram in node.js

    For a project I am working on, I needed to rewrite a DGram port. I moved the ports around and found a few quick tests.

    Testing with NC

    my-machine:~$ echo -n "data-message" | nc -v -4u -w1 localhost 88
    found 0 associations
    found 1 connections:
    1: flags=82<CONNECTED,PREFERRED>
    outif (null)
    src 127.0.0.1 port 53862
    dst 127.0.0.1 port 88
    rank info not available
    Connection to localhost port 88 [udp/radan-http] succeeded!
    

    Rewriting incoming datagrams to another port
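
    The sample itself is not included in this post; a minimal sketch of a dgram forwarder, assuming port 88 in and a hypothetical target port of 10088:

    // forward.js - listen on UDP 88 and rewrite datagrams to another port.
    const dgram = require('dgram');

    const LISTEN_PORT = 88;     // matches the nc test above
    const TARGET_PORT = 10088;  // hypothetical destination port
    const server = dgram.createSocket('udp4');

    server.on('message', (msg, rinfo) => {
      console.log('server got: ' + msg);
      // Re-send the datagram to the target port on localhost.
      server.send(msg, TARGET_PORT, '127.0.0.1');
    });

    server.on('listening', () => {
      const addr = server.address();
      console.log(`server listening ${addr.address}:${addr.port}`);
    });

    server.bind(LISTEN_PORT);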

    You can run the sample and get results like the following:

    server listening 0.0.0.0:88
    server got: j��0����