HBase Metadata – How to Scan

I learned this tip from a colleague. You can easily scan the hbase:meta table to find the timestamp when an HBase table was created. Details on the metadata can be found on the O’Reilly website.

# Switch to the hbase service user
sudo su - hbase
# Authenticate with the keytab; the principal name is read from the headless keytab via klist
/usr/bin/kinit -kt /etc/security/keytabs/<KEYTAB FILE> $(/usr/bin/klist -kt /etc/security/keytabs/hbase.headless.keytab | tail -n 1 | awk '{print $NF}')
# Pipe the scan command into the hbase shell
cat << EOF | /usr/iop/current/hbase-client/bin/hbase shell
scan 'hbase:meta'
EOF
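
To narrow the scan to a single table, you can bound it by row key, since hbase:meta row keys begin with the table name. A minimal sketch, assuming a default-namespace table named mytable — the timestamp= on the info:regioninfo cell of the first region approximates the table creation time:

cat << EOF | /usr/iop/current/hbase-client/bin/hbase shell
scan 'hbase:meta', {STARTROW => 'mytable,', STOPROW => 'mytable-', LIMIT => 5}
EOF
# '-' sorts immediately after ',' so the STOPROW bounds the scan to rows for mytable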

Handy DB2 Tool to Test Connectivity

[root@server]# java -Djavax.net.ssl.trustStore=keystore.jks \
  -Djavax.net.ssl.trustStorePassword=******** \
  -Djavax.net.ssl.keyStore=keystore.jks \
  -Djavax.net.ssl.keyStorePassword=******** \
  -cp WEB-INF/lib/db2jcc4-4.19.49.jar com.ibm.db2.jcc.DB2Jcc \
  -url "jdbc:db2://:50443/database:sslConnection=true;" \
  -user db2user -password ******** -tracing

[jcc][10521][13706]Command : java com.ibm.db2.jcc.DB2Jcc -url jdbc:db2://:50443/database:sslConnection=true; -user db2user -password ******** -tracing

[jcc][time:2017-07-14-01:48:18.880][Thread:main][tracepoint:10] DataSource created. Table size: 1
 [jcc] BEGIN TRACE_XML_CONFIGURATION_FILE
 [jcc] dsdriverConfigFile=null
 [jcc] END TRACE_XML_CONFIGURATION_FILE
 [jcc] BEGIN TRACE_DRIVER_CONFIGURATION
 [jcc] Driver: IBM Data Server Driver for JDBC and SQLJ 4.19.49
 [jcc] Compatible JRE versions: { 1.6, 1.7 }
 [jcc] Target server licensing restrictions: { z/OS: disabled; SQLDS: disabled; iSeries: disabled; DB2 for Unix/Windows: disabled; Cloudscape: enabled; Informix: enabled }
 [jcc] License editions: { O: not found; ZS: not found; IS: not found; AS: not found; EE: not found; PE: not found }
 [jcc] Range checking enabled: true
 [jcc] Bug check level: 0xff
 [jcc] Default fetch size: 64
 [jcc] Default isolation: 2
 [jcc] Collect performance statistics: false
 [jcc] No security manager detected.
 [jcc] Detected local client host: client/ip
 [jcc] Access to package sun.io is permitted by security manager.
 [jcc] JDBC 1 system property jdbc.drivers = null
 [jcc] Java Runtime Environment version 1.8.0
 [jcc] Java Runtime Environment vendor = IBM Corporation
 [jcc] Java vendor URL = http://www.ibm.com/
 [jcc] Java installation directory = /opt/ibm/ibm-java-sdk-8.0-4.5/jre
 [jcc] Java Virtual Machine specification version = 1.8
 [jcc] Java Virtual Machine specification vendor = Oracle Corporation
 [jcc] Java Virtual Machine specification name = Java Virtual Machine Specification
 [jcc] Java Virtual Machine implementation version = 2.8
 [jcc] Java Virtual Machine implementation vendor = IBM Corporation
 [jcc] Java Virtual Machine implementation name = IBM J9 VM
 [jcc] Java Runtime Environment specification version = 1.8
 [jcc] Java Runtime Environment specification vendor = Oracle Corporation
 [jcc] Java Runtime Environment specification name = Java Platform API Specification
 [jcc] Java class format version number = 52.0
 [jcc] Java class path = WEB-INF/lib/db2jcc4-4.19.49.jar
 [jcc] Java native library path = /opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/amd64/compressedrefs:/opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/amd64:/usr/lib64:/usr/lib
 [jcc] Path of extension directory or directories = /opt/ibm/ibm-java-sdk-8.0-4.5/jre/lib/ext
 [jcc] Operating system name = Linux
 [jcc] Operating system architecture = amd64
 [jcc] Operating system version = 3.10.0-327.10.1.el7.x86_64
 [jcc] File separator ("/" on UNIX) = /
 [jcc] Path separator (":" on UNIX) = :
 [jcc] User's account name = root
 [jcc] User's home directory = /root
 [jcc] User's current working directory = /tmp
 [jcc] JCC outputDirectory = /tmp
 [jcc] Using global configuration settings:
 [jcc] maxTransportObjects = 1000
 [jcc] Dumping all system properties: { java.vendor=IBM Corporation, sun.java.launcher=SUN_STANDARD, javax.net.ssl.trustStorePassword==******, os.name=Linux, ..., com.ibm.oti.vm.library.version=28, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, file.separator=/, java.specification.name=Java Platform API Specification, com.ibm.packed.version=2, java.class.version=52.0, user.country=US, java.home=/opt/ibm/ibm-java-sdk-8.0-4.5/jre, java.vm.info=JRE 1.8.0 Linux amd64-64 Compressed References 20170419_344392 (JIT enabled, AOT enabled)
 J9VM - R28_20170419_1004_B344392
 JIT - tr.r14.java_20170419_344392
 GC - R28_20170419_1004_B344392_CMPRSS
 J9CL - 20170419_344392, os.version=3.10.0-327.10.1.el7.x86_64, java.awt.fonts=, }
 [jcc] Dumping all file properties: { }
 [jcc] END TRACE_DRIVER_CONFIGURATION
 [jcc] BEGIN TRACE_CONNECTS
 [jcc] Attempting connection to :50443/database
 [jcc] Using properties: { maxStatements=0, currentPackagePath=null, currentLockTimeout=-2147483647, timerLevelForQueryTimeOut=0, optimizationProfileToFlush=null, timeFormat=1, monitorPort=0, sendCharInputsUTF8=0, LOCKSSFU=null, alternateGroupDatabaseName=null, extendedTableInfo=0, sendDataAsIs=false, stripTrailingZerosForDecimalNumbers=0, diagLevelExceptionCode=0, returnAlias=1, supportsAsynchronousXARollback=2, sessionTimeZone=null, pkList=null, atomicMultiRowInsert=0, traceFileCount=2, DEBUG=null, IFX_UPDDESC=1, traceDirectory=null, maxRowsetSize=32767, driverType=4, extendedDiagnosticLevel=240, accountingInterval=null, monitoredDataSourceName=null, concurrentAccessResolution=0, LKNOTIFY=yes, clientProgramName=null, enableAlternateGroupSeamlessACR=false, connectNode=-1, traceFileSize=1048576, progressiveStreaming=0, profileName=null, DBMAXPROC=null, // }
 [jcc] END TRACE_CONNECTS
 [jcc][am] [time:2017-07-14-01:48:18.972][Thread:main][tracepoint:100]Connection com.ibm.db2.jcc.t4.b@980bc6a1 start time: 1499996898972
 [jcc][am] [time:2017-07-14-01:48:18.974][Thread:main][tracepoint:101]securityMechanism applied on connection object=3
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (false) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (true) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:19.016][Thread:main][tracepoint:316]creating a socket to 192.168.1.24 at 50443
 [jcc][t4] [time:2017-07-14-01:48:20.529][Thread:main][tracepoint:100]OpenSSLAction creating socket with tcipTimeout: 0 and so_timeout: 0
 [jcc][t4] [time:2017-07-14-01:48:20.538][Thread:main][tracepoint:320]acrossAlternateGroup_=false
 [jcc][t4][time:2017-07-14-01:48:20.541][Thread:main][tracepoint:1][Request.flush]
 [jcc][t4] SEND BUFFER: EXCSAT (ASCII) (EBCDIC)
 [jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D 
<REMOVED>
 [jcc][t4]
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:101]Request flushed.
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:111]Connection isClosed: true. getApplicableTimeout (true) returning: 0
 [jcc][t4] [time:2017-07-14-01:48:20.596][Thread:main][tracepoint:102]Reply to be filled.

...
 [jcc][t4]
 [jcc][ResultSetMetaData@7f399260] BEGIN TRACE_RESULT_SET_META_DATA
 [jcc][ResultSetMetaData@7f399260] Result set meta data for statement Statement@89b322dd
 [jcc][ResultSetMetaData@7f399260] Number of result set columns: 1
 isDescribed=true[jcc][ResultSetMetaData@7f399260] Column 1: { label=1, name=1, type name=INTEGER, type=4, nullable=0, precision=10, scale=0, schema name=, table name=, writable=false, sqlPrecision=0, sqlScale=0, sqlLength=4, sqlType=496, sqlCcsid=0, sqlArrExtent=0, sqlName=1, sqlLabel=null, sqlUnnamed=1, sqlComment=null, sqludtxType=, sqludtRdb=, sqludtSchema=, sqludtName=, sqlxKeymem=0, sqlxGenerated=0, sqlxParmmode=0, sqlxOptlck=0, sqlxCorname=null, sqlxName=null, sqlxBasename=null, sqlxUpdatable=0, sqlxSchema=null, sqlxRdbnam=DATABASE, internal type=4, is locator parameter=false }
 [jcc][ResultSetMetaData@7f399260]{ sqldHold=1, sqldReturn=0, sqldScroll=0, sqldSensitive=0, sqldFcode=85, sqldKeytype=0, sqldRdbnam=, sqldSchema=null }
 [jcc][ResultSetMetaData@7f399260] END TRACE_RESULT_SET_META_DATA
 [jcc][Time:2017-07-14-01:48:20.685][Thread:main][PreparedStatement@89b322dd]executeQuery () returned com.ibm.db2.jcc.t4.h@5e36998
 [jcc][Thread:main][SystemMonitor:stop] core: 24.900053999999997ms | network: 5.893339999999999ms | server: 1.162ms [STMT@-1984748835]
 [jcc][SystemMonitor:start]
 [jcc][Time:2017-07-14-01:48:20.685][Thread:main][ResultSet@5e36998]close () called
 [jcc][Time:2017-07-14-01:48:20.686][Thread:main][ResultSet@5e36998]closeX (null, com.ibm.db2.jcc.t4.b@980bc6a1) called
 [jcc][t4][time:2017-07-14-01:48:20.686][Thread:main][tracepoint:1][Request.flush]
 [jcc][t4] SEND BUFFER: RDBCMM (ASCII) (EBCDIC)
 [jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
 [jcc][t4]
 [jcc][Connection@980bc6a1] DB2 LUWID: 192.168.0.64.53338.170714014901.0004
 [jcc][Time:2017-07-14-01:48:20.935][Thread:main][Connection@980bc6a1]commit () returned null
 [jcc][Thread:main][SystemMonitor:stop] core: 2.103266ms | network: 1.58358ms | server: 0.013000000000000001ms
 [jcc][Time:2017-07-14-01:48:20.935][Thread:main][Connection@980bc6a1]close () called
 [jcc][Connection@980bc6a1] DB2 LUWID: 192.168.0.64.53338.170714014901.0005
 [jcc][t4] [time:2017-07-14-01:48:20.935][Thread:main][tracepoint:202] closing non-pooled Transport

java -Djavax.net.ssl.trustStore=keystore.jks -Djavax.net.ssl.trustStorePassword=PASSWORD -Djavax.net.ssl.keyStore=keystore.jks -Djavax.net.ssl.keyStorePassword=PASSWORD -cp db2jcc4-4.19.49.jar com.ibm.db2.jcc.DB2Jcc -url "jdbc:db2://:50443/DATABASE:sslConnection=true;" -user  -password  -tracing

https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.1.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_rjv00004.html
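
Before running the connectivity test, it can help to confirm the DB2 server certificate actually made it into the truststore. A quick check with keytool (keystore.jks is an assumption, use your own file; it will prompt for the store password):

keytool -list -v -keystore keystore.jks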

Kafka – Handy Commands

Check Partition Count and Replication Factor
  bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --describe --topic replication
 Topic:replication PartitionCount:20 ReplicationFactor:3 Configs:
  Topic: replication Partition: 0 Leader: 1005 Replicas: 1005,1001,1002 Isr: 1001,1002,1005
  Topic: replication Partition: 1 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002
  Topic: replication Partition: 2 Leader: 1002 Replicas: 1002,1003,1004 Isr: 1002,1003,1004
  Topic: replication Partition: 3 Leader: 1003 Replicas: 1003,1004,1005 Isr: 1003,1004,1005
  Topic: replication Partition: 4 Leader: 1004 Replicas: 1004,1005,1001 Isr: 1001,1004,1005
  Topic: replication Partition: 5 Leader: 1005 Replicas: 1005,1002,1003 Isr: 1003,1002,1005
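
A related check that comes in handy: kafka-topics.sh can report only the partitions whose in-sync replica set has fallen below the replication factor, using the same tool and ZooKeeper connection as above:

 bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --describe --under-replicated-partitions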

Alter Partitions
https://stackoverflow.com/questions/33677871/is-it-possible-to-add-partitions-to-an-existing-topic-in-kafka-0-8-2

 bin/kafka-topics.sh --zookeeper `hostname --long`:2181 --alter --topic file-local --partitions 20
 WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
 Adding partitions succeeded!

Note that the partition count can only be increased; Kafka does not support reducing the number of partitions on an existing topic.

Alter Replication Factor
https://kafka.apache.org/documentation/#basic_ops_increase_replication_factor
https://stackoverflow.com/questions/37960767/how-to-change-the-replicas-of-kafka-topic
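
Both links boil down to the kafka-reassign-partitions.sh tool: write a JSON file listing the desired replica set per partition, then execute it. A minimal sketch for one partition of the replication topic — the broker ids are assumptions taken from the describe output above:

 cat > increase-replication-factor.json << 'JSON'
 {"version":1,
  "partitions":[{"topic":"replication","partition":0,"replicas":[1005,1001,1002,1003]}]}
 JSON
 bin/kafka-reassign-partitions.sh --zookeeper `hostname --long`:2181 \
   --reassignment-json-file increase-replication-factor.json --execute
 # check progress with --verify using the same JSON file
 bin/kafka-reassign-partitions.sh --zookeeper `hostname --long`:2181 \
   --reassignment-json-file increase-replication-factor.json --verify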

Add ACL

[root@kafka-1 kafka-broker]# /usr/iop/current/kafka-broker/bin/kafka-acls.sh --add \
  --allow-host '*' --allow-principal 'User:CN=kafka-1.local,C=US' \
  --authorizer kafka.security.auth.SimpleAclAuthorizer \
  --authorizer-properties zookeeper.connect=`hostname --long`:2181 \
  --topic fhir-local --group '*' --operation 'All'
 [2017-07-20 11:30:49,713] WARN read null data from /kafka-acl-changes/acl_changes_0000000420 when processing notification acl_changes_0000000420 (kafka.common.ZkNodeChangeNotificationListener)
 [2017-07-20 11:30:49,717] WARN read null data from /kafka-acl-changes/acl_changes_0000000421 when processing notification acl_changes_0000000421 (kafka.common.ZkNodeChangeNotificationListener)
 [2017-07-20 11:30:49,736] WARN read null data from /kafka-acl-changes/acl_changes_0000000422 when processing notification acl_changes_0000000422 (kafka.common.ZkNodeChangeNotificationListener)
 Adding ACLs for resource `Topic:fhir-local`:
  User:CN=kafka-1.local,C=US has Allow permission for operations: All from hosts: *

Adding ACLs for resource `Group:*`:
  User:CN=kafka-1.local,C=US has Allow permission for operations: All from hosts: *
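
To confirm the ACL took effect, the same tool can list what is currently set on the topic:

/usr/iop/current/kafka-broker/bin/kafka-acls.sh --list \
  --authorizer kafka.security.auth.SimpleAclAuthorizer \
  --authorizer-properties zookeeper.connect=`hostname --long`:2181 --topic fhir-local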

Chrome Crashes – Rinse and Repeat – Crash again on Startup

I couldn’t launch Chrome to save my bacon; every launch ended in an immediate crash. I decided to launch it from Terminal to see the error output.

17:16:47-paulbastide@Pauls-MacBook-Pro:/Applications/Google Chrome.app/Contents/MacOS$ /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome 
 crashpad_handler: --database is required
 Try 'crashpad_handler --help' for more information.
 [0721/171806.760082:ERROR:file_io.cc(89)] ReadExactly: expected 8, observed 0
 [0721/171806.761010:ERROR:crash_report_database_mac.mm(93)] mkdir : No such file or directory
 2017-07-21 17:18:07.121 Google Chrome[4144:40131] Errors logged by ksadmin: KSKeyedPersistentStore store directory does not exist. [com.google.UpdateEngine.CommonErrorDomain:501 - '/Library/Google/GoogleSoftwareUpdate/TicketStore' - 'KSKeyedPersistentStore.m:372']
 KSPersistentTicketStore failed to load tickets. (productID: com.google.Chrome) [com.google.UpdateEngine.CoreErrorDomain:1051 - '/Library/Google/GoogleSoftwareUpdate/TicketStore/Keystone.ticketstore'] (KSKeyedPersistentStore store directory does not exist. - '/Library/Google/GoogleSoftwareUpdate/TicketStore' [com.google.UpdateEngine.CommonErrorDomain:501])
 ksadmin cannot access the ticket store:<KSUpdateError:0x1004086d0
 domain="com.google.UpdateEngine.CoreErrorDomain"
 code=1051
 userInfo={
 function = "-[KSProductKeyedStore(ProtectedMethods) errorForStoreError:productID:message:timeoutMessage:]";
 date = 2017-07-21 21:18:07 +0000;
 productids = {(
 "com.google.Chrome"
 )};
 filename = "KSProductKeyedStore.m";
 line = 91;
 NSFilePath = "/Library/Google/GoogleSoftwareUpdate/TicketStore/Keystone.ticketstore";
 NSUnderlyingError = ;
 NSLocalizedDescription = "KSPersistentTicketStore failed to load tickets.";
 }
 >
 Segmentation fault: 11

I launched with the --user-data-dir option, which I found in the Chromium website documentation.

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=~/chrome

I figured out the issue was that /Library/Google/GoogleSoftwareUpdate/TicketStore was owned by root.

I changed its ownership to my user.
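
A minimal sketch of the fix, using the path from the error output above (run from an admin account):

sudo chown -R $(whoami) /Library/Google/GoogleSoftwareUpdate/TicketStore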

Two Kafka Tips

To change the heap options in Kafka, set KAFKA_HEAP_OPTS on the command line or edit kafka-env.sh on CDP or IOP, and add:

KAFKA_HEAP_OPTS="-Xmx64G -Xms64G"
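
For the command-line variant, a minimal sketch — the IOP broker paths are assumptions, adjust for your install:

export KAFKA_HEAP_OPTS="-Xmx64G -Xms64G"
/usr/iop/current/kafka-broker/bin/kafka-server-start.sh /usr/iop/current/kafka-broker/config/server.properties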

From IBM, when running with the IBM JVM: the IBM JVM does not recognize the HotSpot -Xloggc option and expects -Xverbosegclog instead.

To resolve this issue:

Edit the <kafka_home>/bin/kafka-run-class.sh file.
Locate the following line:

KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps"

To enable verbose logging, change it to:

KAFKA_GC_LOG_OPTS="-Xverbosegclog:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps"

Save the file.
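
The same edit as a one-liner, assuming Kafka lives under /opt/kafka:

sed -i 's/-Xloggc:/-Xverbosegclog:/' /opt/kafka/bin/kafka-run-class.sh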

Kafka Kerberos Debug

I’ve spent a lot of time with Kerberos recently.

If you are debugging Kerberos and Kafka, add -Dsun.security.krb5.debug=true to KAFKA_HEAP_OPTS before starting Kafka.

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/opt/kafka/security/kafka_server_jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Dlog4j.properties=file:///opt/kafka/config/log4j.properties -Dsun.security.krb5.debug=true"

/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties

You get a rich log file:

root@broker:/# cat zookeeper.log
[2017-05-20 20:25:33,482] INFO Reading configuration from: /opt/kafka/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,495] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2017-05-20 20:25:33,495] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,497] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-05-20 20:25:33,512] INFO Reading configuration from: /opt/kafka/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,512] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2017-05-20 20:25:33,513] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,513] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2017-05-20 20:25:33,520] INFO Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:host.name=broker.example.local (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.version=1.8.0_131 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/opt/kafka/bin/../libs/argparse4j-0.5.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-file-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-json-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.1.0.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/opt/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/opt/kafka/bin/../libs/jackson-core-2.6.3.jar:/opt/kafka/bin/../libs/jackson-databind-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/opt/kafka/bin/../libs/javassist-3.18.2-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.22.2.jar:/opt/kafka/bin/../libs/jersey-common-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/opt/kafka/bin/../libs/jersey-guava-2.22.2.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/opt/kafka/bin/../libs/jersey-server-2.22.2.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-4.9.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-4.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.9.jar:/opt/kafka/bin/../libs/zookeeper-3.4.8.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.version=4.9.27-moby (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.dir=/ (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt true ticketCache is null isInitiator true KeyTab is /etc/security/keytabs/zookeeper.keytab refreshKrb5Config is true principal is zookeeper/broker.example.local@example.LOCAL tryFirstPass is false useFirstPass is true storePass is false clearPass is false
Refreshing Kerberos configuration
Java config name: /etc/krb5.conf
Loaded from Java config

KdcAccessibility: reset
KdcAccessibility: reset
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 107; type: 18
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 17
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 99; type: 16
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 23
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 107; type: 18
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 17
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 99; type: 16
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 23
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
default etypes for default_tkt_enctypes: 18 17 16.
KrbAsReq creating message
KrbKdcReq send: kdc=broker.example.local UDP:88, timeout=30000, number of retries =3, #bytes=188
KDCCommunication: kdc=broker.example.local UDP:88, timeout=30000,Attempt =1, #bytes=188
KrbKdcReq send: #bytes read=804
KdcAccessibility: remove broker.example.local
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
KrbAsRep cons in KrbAsReq.getReply zookeeper/broker.example.local
principal is zookeeper/broker.example.local@example.LOCAL
Will use keytab
Commit Succeeded

[2017-05-20 20:25:33,649] INFO successfully logged in. (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,651] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,656] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-05-20 20:25:33,666] INFO TGT valid starting at: Sat May 20 20:25:33 UTC 2017 (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,666] INFO TGT expires: Sun May 21 20:25:33 UTC 2017 (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,666] INFO TGT refresh sleeping until: Sun May 21 16:02:45 UTC 2017 (org.apache.zookeeper.Login)
root@broker:/#
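
With that much output, it helps to filter for the Kerberos-relevant lines; a quick grep along these lines works:

grep -iE 'krb|keytab|kdc|TGT' zookeeper.log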