Primer: Tips for using Random Test Files

On Mac and Linux, create a random file by reading from /dev/urandom with dd:

$ dd if=/dev/urandom of=random-test-file-100m bs=1024k count=100
100+0 records in
100+0 records out
102400000 bytes transferred in 10.143507 secs (10095128 bytes/sec)

This gives you a file of genuinely random bytes. Take a checksum with shasum -a 256 so you can verify that the file coming back from your test is the one you sent in. Also remember that a KB is 1024 bytes, so stick to block sizes that are multiples of 1024.
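
For example, a minimal sketch using the file generated above (-c re-verifies against the saved checksum):

$ shasum -a 256 random-test-file-100m > random-test-file-100m.sha256
$ shasum -a 256 -c random-test-file-100m.sha256
random-test-file-100m: OK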

On Linux, you can create a large file almost instantly with fallocate:

fallocate -l 100M test-file-100m

This takes less than a second, but the result is no good for testing behavior that depends on random content: fallocate only reserves the space, so the whole 100M reads back as zeros rather than random bytes.
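
You can confirm this with a quick dump; the expected output is sketched below (hexdump collapses the repeated all-zero lines into a single *):

$ hexdump -C test-file-100m | head -n 3
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
06400000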

To check the sizes of the sample files, use ls with human-readable sizes (-h), sorted largest first (-S):

MacBook-Air:test user$ ls -lSh
total 24
-rw-r--r--  1 user  staff   10K Dec 20 19:53 setup.log

Deleting a bad change log entry in Liquibase

1 – Identify the change set you need to remove (to run again)

<changeSet author="me" id="2.1.0-x-table">
    <sqlFile path="/mycustomsql.sql"
        relativeToChangelogFile="false" stripComments="true" />
</changeSet>
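
Before deleting anything, you can look the row up in the change log table to be sure you have the right ID (a sketch; ID, AUTHOR, FILENAME, and DATEEXECUTED are standard Liquibase columns, but verify against your schema):

[db2inst1@db myfolder]$ db2 "SELECT ID, AUTHOR, FILENAME, DATEEXECUTED FROM myschema.DATABASECHANGELOG WHERE ID = '2.1.0-x-table'"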

2 – As your db instance owner, remove the change log entry
[db2inst1@db myfolder]$ db2 "DELETE from myschema.DATABASECHANGELOG WHERE ID = '2.1.0-x-table'"
DB20000I The SQL command completed successfully.

3 – Confirm the change log is removed
[db2inst1@db myfolder]$ db2 "select * from myschema.DATABASECHANGELOG WHERE ID = '2.1.0-x-table'"
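
If the delete worked, the select should come back empty; the DB2 command line reports something like:

0 record(s) selected.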

Quick Method to see Kafka-Broker uptime

In the ps output, lstart is the actual start time of the process and etime is the elapsed time since it started:

[userid@kafka-server ~]$ ps -eo pid,comm,lstart,etime,time,args | grep -i kafka | grep -v grep
9863 java Thu Nov 3 16:19:38 2017 05:25:55 00:09:55 /usr/jdk64/java-1.8.0-openjdk- -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/usr/iop/current/kafka-broker/bin/../config/ -cp :/kafka-broker/bin/../libs/* -Xmx8g -Xms8g kafka Kafka /kafka-broker/config/
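
Once you know the broker's PID (9863 in the output above), you can pull just the uptime; the trailing = suppresses the column header:

[userid@kafka-server ~]$ ps -o etime= -p 9863
   05:25:55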

HBase Metadata – How to Scan

I learned this tip from a colleague. You can easily scan hbase:meta to find the timestamp when an HBase table was created. Details on the metadata layout can be found on the O'Reilly website.

sudo su - hbase
/usr/bin/kinit -kt /etc/security/keytabs/<KEYTAB FILE> $(/usr/bin/klist -kt /etc/security/keytabs/hbase.headless.keytab | tail -n 1 | awk '{print $NF}')
cat << EOF | /usr/iop/current/hbase-client/bin/hbase shell
scan 'hbase:meta'
EOF
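
To narrow the scan to one table's rows, a row-prefix filter works (the table name here is hypothetical):

cat << EOF | /usr/iop/current/hbase-client/bin/hbase shell
scan 'hbase:meta', {ROWPREFIXFILTER => 'mytable'}
EOF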

Handy DB2 Tool to test Connectivity

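The JCC driver jar ships with a small standalone connectivity tester, the com.ibm.db2.jcc.DB2Jcc class. Point it at your JDBC URL (hostnames are scrubbed in the run below) and add -tracing to dump the full driver trace:
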
[root@server]# java -cp WEB-INF/lib/db2jcc4-4.19.49.jar com.ibm.db2.jcc.DB2Jcc -url "jdbc:db2://:50443/database:sslConnection=true;" -user db2user -password ******** -tracing

[jcc][10521][13706]Command : java -url jdbc:db2://:50443/database:sslConnection=true; -user db2user -password ******** -tracing

[jcc][time:2017-07-14-01:48:18.880][Thread:main][tracepoint:10] DataSource created. Table size: 1
 [jcc] dsdriverConfigFile=null
 [jcc] Driver: IBM Data Server Driver for JDBC and SQLJ 4.19.49
 [jcc] Compatible JRE versions: { 1.6, 1.7 }
 [jcc] Target server licensing restrictions: { z/OS: disabled; SQLDS: disabled; iSeries: disabled; DB2 for Unix/Windows: disabled; Cloudscape: enabled; Informix: enabled }
 [jcc] License editions: { O: not found; ZS: not found; IS: not found; AS: not found; EE: not found; PE: not found }
 [jcc] Range checking enabled: true
 [jcc] Bug check level: 0xff
 [jcc] Default fetch size: 64
 [jcc] Default isolation: 2
 [jcc] Collect performance statistics: false
 [jcc] No security manager detected.
 [jcc] Detected local client host: client/ip
 [jcc] Access to package is permitted by security manager.
 [jcc] JDBC 1 system property jdbc.drivers = null
 [jcc] Java Runtime Environment version 1.8.0
 [jcc] Java Runtime Environment vendor = IBM Corporation
 [jcc] Java vendor URL =
 [jcc] Java installation directory = /opt/ibm/ibm-java-sdk-8.0-4.5/jre
 [jcc] Java Virtual Machine specification version = 1.8
 [jcc] Java Virtual Machine specification vendor = Oracle Corporation
 [jcc] Java Virtual Machine specification name = Java Virtual Machine Specification
 [jcc] Java Virtual Machine implementation version = 2.8
 [jcc] Java Virtual Machine implementation vendor = IBM Corporation
 [jcc] Java Virtual Machine implementation name = IBM J9 VM
 [jcc] Java Runtime Environment specification version = 1.8
 [jcc] Java Runtime Environment specification vendor = Oracle Corporation
 [jcc] Java Runtime Environment specification name = Java Platform API Specification
 [jcc] Java class format version number = 52.0
 [jcc] Java class path = WEB-INF/lib/db2jcc4-4.19.49.jar
 [jcc] Java native library path = /opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/amd64/compressedrefs:/opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/amd64:/usr/lib64:/usr/lib
 [jcc] Path of extension directory or directories = /opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/ext
 [jcc] Operating system name = Linux
 [jcc] Operating system architecture = amd64
 [jcc] Operating system version = 3.10.0-327.10.1.el7.x86_64
 [jcc] File separator ("/" on UNIX) = /
 [jcc] Path separator (":" on UNIX) = :
 [jcc] User's account name = root
 [jcc] User's home directory = /root
 [jcc] User's current working directory = /tmp
 [jcc] JCC outputDirectory = /tmp
 [jcc] Using global configuration settings:
 [jcc] maxTransportObjects = 1000
 [jcc] Dumping all system properties: { java.vendor=IBM Corporation,,******,, ...,, sun.jnu.encoding=UTF-8,, file.separator=/, Platform API Specification,, java.class.version=52.0,, java.home=/opt/ibm/ibm-java-sdk-8.0-4.5/jre, 1.8.0 Linux amd64-64 Compressed References 20170419_344392 (JIT enabled, AOT enabled)
 J9VM - R28_20170419_1004_B344392
 JIT - tr.r14.java_20170419_344392
 GC - R28_20170419_1004_B344392_CMPRSS
 J9CL - 20170419_344392, os.version=3.10.0-327.10.1.el7.x86_64, java.awt.fonts=, }
 [jcc] Dumping all file properties: { }
 [jcc] Attempting connection to :50443/database
 [jcc] Using properties: { maxStatements=0, currentPackagePath=null, currentLockTimeout=-2147483647, timerLevelForQueryTimeOut=0, optimizationProfileToFlush=null, timeFormat=1, monitorPort=0, sendCharInputsUTF8=0, LOCKSSFU=null, alternateGroupDatabaseName=null, extendedTableInfo=0, sendDataAsIs=false, stripTrailingZerosForDecimalNumbers=0, diagLevelExceptionCode=0, returnAlias=1, supportsAsynchronousXARollback=2, sessionTimeZone=null, pkList=null, atomicMultiRowInsert=0, traceFileCount=2, DEBUG=null, IFX_UPDDESC=1, traceDirectory=null, maxRowsetSize=32767, driverType=4, extendedDiagnosticLevel=240, accountingInterval=null, monitoredDataSourceName=null, concurrentAccessResolution=0, LKNOTIFY=yes, clientProgramName=null, enableAlternateGroupSeamlessACR=false, connectNode=-1, traceFileSize=1048576, progressiveStreaming=0, profileName=null, DBMAXPROC=null, // }
 [jcc][am] [time:2017-07-14-01:48:18.972][Thread:main][tracepoint:100]Connection start time: 1499996898972
 [jcc][am] [time:2017-07-14-01:48:18.974][Thread:main][tracepoint:101]securityMechanism applied on connection object=3
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (false) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (true) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:316]creating a socket to at 50443
 [jcc][t4] [time:2017-07-14-01:48:20.529][Thread:main][tracepoint:100]OpenSSLAction creating socket with tcipTimeout: 0 and so_timeout: 0
 [jcc][t4] [time:2017-07-14-01:48:20.538][Thread:main][tracepoint:320]acrossAlternateGroup_=false
 [jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D 
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:101]Request flushed.
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (true) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:102]Reply to be filled.

 [jcc][ResultSetMetaData@7f399260] BEGIN TRACE_RESULT_SET_META_DATA
 [jcc][ResultSetMetaData@7f399260] Result set meta data for statement Statement@89b322dd
 [jcc][ResultSetMetaData@7f399260] Number of result set columns: 1
 isDescribed=true[jcc][ResultSetMetaData@7f399260] Column 1: { label=1, name=1, type name=INTEGER, type=4, nullable=0, precision=10, scale=0, schema name=, table name=, writable=false, sqlPrecision=0, sqlScale=0, sqlLength=4, sqlType=496, sqlCcsid=0, sqlArrExtent=0, sqlName=1, sqlLabel=null, sqlUnnamed=1, sqlComment=null, sqludtxType=, sqludtRdb=, sqludtSchema=, sqludtName=, sqlxKeymem=0, sqlxGenerated=0, sqlxParmmode=0, sqlxOptlck=0, sqlxCorname=null, sqlxName=null, sqlxBasename=null, sqlxUpdatable=0, sqlxSchema=null, sqlxRdbnam=DATABASE, internal type=4, is locator parameter=false }
 [jcc][ResultSetMetaData@7f399260]{ sqldHold=1, sqldReturn=0, sqldScroll=0, sqldSensitive=0, sqldFcode=85, sqldKeytype=0, sqldRdbnam=, sqldSchema=null }
 [jcc][ResultSetMetaData@7f399260] END TRACE_RESULT_SET_META_DATA
 [jcc][Time:2017-07-14-01:48:20.685][Thread:main][PreparedStatement@89b322dd]executeQuery () returned
 [jcc][Thread:main][SystemMonitor:stop] core: 24.900053999999997ms | network: 5.893339999999999ms | server: 1.162ms [STMT@-1984748835]
 [jcc][Time:2017-07-14-01:48:20.685][Thread:main][ResultSet@5e36998]close () called
 [jcc][Time:2017-07-14-01:48:20.686][Thread:main][ResultSet@5e36998]closeX (null, called
 [jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
 [jcc][Connection@980bc6a1] DB2 LUWID:
 [jcc][Time:2017-07-14-01:48:20.935][Thread:main][Connection@980bc6a1]commit () returned null
 [jcc][Thread:main][SystemMonitor:stop] core: 2.103266ms | network: 1.58358ms | server: 0.013000000000000001ms
 [jcc][Time:2017-07-14-01:48:20.935][Thread:main][Connection@980bc6a1]close () called
 [jcc][Connection@980bc6a1] DB2 LUWID:
 [jcc][t4] [time:2017-07-14-01:48:20.935][Thread:main][tracepoint:202] closing non-pooled Transport

java -cp db2jcc4-4.19.49.jar com.ibm.db2.jcc.DB2Jcc -url "jdbc:db2://:50443/DATABASE:sslConnection=true;" -user  -password  -tracing

Kafka – Handy Commands

Check Partition Count and Replication Factor
  bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --describe --topic replication
 Topic:replication PartitionCount:20 ReplicationFactor:3 Configs:
  Topic: replication Partition: 0 Leader: 1005 Replicas: 1005,1001,1002 Isr: 1001,1002,1005
  Topic: replication Partition: 1 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002
  Topic: replication Partition: 2 Leader: 1002 Replicas: 1002,1003,1004 Isr: 1002,1003,1004
  Topic: replication Partition: 3 Leader: 1003 Replicas: 1003,1004,1005 Isr: 1003,1004,1005
  Topic: replication Partition: 4 Leader: 1004 Replicas: 1004,1005,1001 Isr: 1001,1004,1005
  Topic: replication Partition: 5 Leader: 1005 Replicas: 1005,1002,1003 Isr: 1003,1002,1005
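
For reference, a topic with that layout would be created with something like this (a sketch; the counts are taken from the describe output above):

bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --create --topic replication --partitions 20 --replication-factor 3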

Alter Partitions

 bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --alter --topic file-local --partitions 20
 WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
 Adding partitions succeeded!

Note that the partition count can only be increased, never decreased.

Alter Replication Factor


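There is no --alter flag for the replication factor; you change it by reassigning partitions with an explicit replica list. A minimal sketch, assuming the broker IDs from the describe output above and a hypothetical JSON file name (one partition shown; repeat the stanza for each partition):

cat > increase-rf.json <<'EOF'
{"version":1,"partitions":[{"topic":"replication","partition":0,"replicas":[1005,1001,1002,1003]}]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper `hostname --long`:2181 --reassignment-json-file increase-rf.json --execute

Add ACLs to a Topic
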
[root@kafka-1 kafka-broker]# /usr/iop/current/kafka-broker/bin/kafka-acls.sh --add --allow-host '*' --allow-principal 'User:CN=kafka-1.local,C=US' --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=`hostname --long`:2181 --topic fhir-local --group '*' --operation 'All'
 [2017-07-20 11:30:49,713] WARN read null data from /kafka-acl-changes/acl_changes_0000000420 when processing notification acl_changes_0000000420 (kafka.common.ZkNodeChangeNotificationListener)
 [2017-07-20 11:30:49,717] WARN read null data from /kafka-acl-changes/acl_changes_0000000421 when processing notification acl_changes_0000000421 (kafka.common.ZkNodeChangeNotificationListener)
 [2017-07-20 11:30:49,736] WARN read null data from /kafka-acl-changes/acl_changes_0000000422 when processing notification acl_changes_0000000422 (kafka.common.ZkNodeChangeNotificationListener)
 Adding ACLs for resource `Topic:fhir-local`:
  User:CN=kafka-1.local,C=US has Allow permission for operations: All from hosts: *

Adding ACLs for resource `Group:*`:
  User:CN=kafka-1.local,C=US has Allow permission for operations: All from hosts: *
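
To confirm what was added, list the ACLs on the topic with the same tool:

/usr/iop/current/kafka-broker/bin/kafka-acls.sh --list --authorizer-properties zookeeper.connect=`hostname --long`:2181 --topic fhir-local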