Kafka Kerberos Debug

I’ve spent a lot of time with Kerberos recently.

If you are debugging Kerberos and Kafka, try this: before starting Kafka, add -Dsun.security.krb5.debug=true to KAFKA_HEAP_OPTS.

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/opt/kafka/security/kafka_server_jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Dlog4j.properties=file:///opt/kafka/config/log4j.properties -Dsun.security.krb5.debug=true"

/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
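
For reference, the JAAS file passed via -Djava.security.auth.login.config above follows the usual Kafka broker layout: a KafkaServer section for the broker itself and a Client section for its connection to a SASL-enabled ZooKeeper. A minimal sketch, where the keytab path and principal are assumptions (hostname and realm are taken from the log output below):

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker.example.local@example.LOCAL";
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker.example.local@example.LOCAL";
};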

You get a rich log file:

root@broker:/# cat zookeeper.log
[2017-05-20 20:25:33,482] INFO Reading configuration from: /opt/kafka/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,495] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2017-05-20 20:25:33,495] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,497] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-05-20 20:25:33,497] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-05-20 20:25:33,512] INFO Reading configuration from: /opt/kafka/config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,512] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2017-05-20 20:25:33,513] ERROR Invalid configuration, only one server specified (ignoring) (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-05-20 20:25:33,513] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2017-05-20 20:25:33,520] INFO Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:host.name=broker.example.local (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.version=1.8.0_131 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/opt/kafka/bin/../libs/argparse4j-0.5.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-file-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-json-0.10.1.0.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.1.0.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/opt/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/opt/kafka/bin/../libs/jackson-core-2.6.3.jar:/opt/kafka/bin/../libs/jackson-databind-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/opt/kafka/bin/../libs/javassist-3.18.2-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.22.2.jar:/opt/kafka/bin/../libs/jersey-common-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/opt/kafka/bin/../libs/jersey-guava-2.22.2.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/opt/kafka/bin/../libs/jersey-server-2.22.2.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-4.9.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.1.0.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-4.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.9.jar:/opt/kafka/bin/../libs/zookeeper-3.4.8.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:os.version=4.9.27-moby (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,520] INFO Server environment:user.dir=/ (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-05-20 20:25:33,527] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt true ticketCache is null isInitiator true KeyTab is /etc/security/keytabs/zookeeper.keytab refreshKrb5Config is true principal is zookeeper/broker.example.local@example.LOCAL tryFirstPass is false useFirstPass is true storePass is false clearPass is false
Refreshing Kerberos configuration
Java config name: /etc/krb5.conf
Loaded from Java config

KdcAccessibility: reset
KdcAccessibility: reset
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 107; type: 18
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 17
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 99; type: 16
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 23
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 107; type: 18
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 17
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 99; type: 16
KeyTabInputStream, readName(): example.LOCAL
KeyTabInputStream, readName(): zookeeper
KeyTabInputStream, readName(): broker.example.local
KeyTab: load() entry length: 91; type: 23
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
default etypes for default_tkt_enctypes: 18 17 16.
KrbAsReq creating message
KrbKdcReq send: kdc=broker.example.local UDP:88, timeout=30000, number of retries =3, #bytes=188
KDCCommunication: kdc=broker.example.local UDP:88, timeout=30000,Attempt =1, #bytes=188
KrbKdcReq send: #bytes read=804
KdcAccessibility: remove broker.example.local
Looking for keys for: zookeeper/broker.example.local@example.LOCAL
Added key: 23version: 2
Added key: 16version: 2
Added key: 17version: 2
Added key: 18version: 2
Added key: 23version: 1
Added key: 16version: 1
Added key: 17version: 1
Added key: 18version: 1
EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
KrbAsRep cons in KrbAsReq.getReply zookeeper/broker.example.local
principal is zookeeper/broker.example.local@example.LOCAL
Will use keytab
Commit Succeeded

[2017-05-20 20:25:33,649] INFO successfully logged in. (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,651] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,656] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-05-20 20:25:33,666] INFO TGT valid starting at: Sat May 20 20:25:33 UTC 2017 (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,666] INFO TGT expires: Sun May 21 20:25:33 UTC 2017 (org.apache.zookeeper.Login)
[2017-05-20 20:25:33,666] INFO TGT refresh sleeping until: Sun May 21 16:02:45 UTC 2017 (org.apache.zookeeper.Login)
root@broker:/#

Logging using PQS lower than 4.6

To log PQS (Phoenix Query Server) traffic, you can use NGINX as a reverse proxy that records the HTTP request body and the request timing.

On your CentOS/RHEL machine, switch to root privileges:
sudo -s

Add the nginx.repo file under /etc/yum.repos.d.
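
A minimal sketch of /etc/yum.repos.d/nginx.repo, following the official install guide (the OS path and release in baseurl are assumptions; adjust them for your distribution):

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1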

Install the nginx package so you can set up the proxy
yum install nginx.x86_64 -y

Create the nginx configuration so you have a log_format that outputs the request body ($request_body) and the request time ($request_time).
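
A sketch of such a log_format, added inside the http { } block of /etc/nginx/nginx.conf (the format name request_body_log is my own choice):

log_format request_body_log '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $body_bytes_sent '
                            '"$request_body" $request_time';

Note that $request_body is only populated for requests whose body nginx actually reads, which is the case once the request is proxied with proxy_pass.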

Create the default server configuration, and set proxy_pass to point to your PQS/Phoenix Query Server.
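
A sketch of /etc/nginx/conf.d/default.conf, assuming PQS listens on its default port 8765 (pqs-host is a placeholder for your query server), writing to the request-body.log that is tailed below:

server {
    listen 80;
    access_log /var/log/nginx/request-body.log request_body_log;

    location / {
        proxy_pass http://pqs-host:8765;
    }
}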

Reload nginx
nginx -s reload

Tail the log file to see the request data as it comes in.
[root@data-com-4 pbastide]# tail -f /var/log/nginx/request-body.log

https://www.nginx.com/resources/admin-guide/reverse-proxy/
https://www.nginx.com/resources/wiki/start/topics/tutorials/install/
https://www.nginx.com/resources/admin-guide/logging-and-monitoring/

Logging using PQS 4.7 and Higher

1 – Drop a log4j.properties file into /tmp and chmod 755 the file
In practice, I’d probably put it in /opt/phoenix/config or /etc/hbase/phoenix
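
A sketch of what that log4j.properties can look like; the Avatica logger name matches the TRACE output in step 4 below, while the appender settings and the /tmp/phoenix-query.log path are assumptions you can adjust:

log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/tmp/phoenix-query.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
log4j.logger.org.apache.calcite.avatica.remote.ProtobufTranslationImpl=TRACE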

2 – Edit queryserver.py (/opt/phoenix/bin/queryserver.py)
Around line 128 (hopefully not changed much from 4.7 to 4.8), add this:
" -Dlog4j.configuration=file:///tmp/log4j.properties" + \

3 – Restart Phoenix
/opt/phoenix/bin/queryserver.py stop
/opt/phoenix/bin/queryserver.py start

4 – Scan the file
grep -i XYZ.dim /tmp/phoenix-query.log

2017-01-05 13:40:39,227 [qtp-1461017990-33 - /] TRACE org.apache.calcite.avatica.remote.ProtobufTranslationImpl - Serializing response 'results { connection_id: "0deb2c47-53e5-4846-b22b-ba3faa0bc37a" statement_id: 1 own_statement: true signature { columns { searchable: true display_size: 32 label: "ID" column_name: "ID" precision: 32 table_name: "XYZ.DIM" read_only: true column_class_name: "java.lang.String" type { id: 12 name: "VARCHAR" rep: STRING } } sql: "select ID from XYZ.DIM WHERE VLD_TO_TS IS NULL LIMIT 1" cursor_factory { style: LIST } } first_frame { done: true rows { value { scalar_value { type: STRING string_value: "00025a56f1084f0584a50f7cf9dc4bfc" } } } } update_count: 18446744073709551615 metadata { server_address: "demo.net:80" } } metadata { server_address: "demo.net:80" }'

You can then correlate on connection_id and analyze the log file for data.
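
For instance, pulling every line for the connection shown above:

grep '0deb2c47-53e5-4846-b22b-ba3faa0bc37a' /tmp/phoenix-query.log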

Automating Integration Tests with WebSphere Liberty and Maven

I spent some hours figuring out how to deploy my webapp to WebSphere Liberty from my Maven project.

I looked into the restConnector-1.0 feature. restConnector provides a REST-based JMX MBean interface to WebSphere Liberty. You add <feature>restConnector-1.0</feature> to your server.xml, and make sure you have a user set in the administrator role.

<administrator-role>
 <user>admin</user>
</administrator-role>
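
The admin user itself has to exist in a user registry; a minimal sketch using basicRegistry (the user name and password here are placeholders):

<basicRegistry id="basic">
 <user name="admin" password="admin" />
</basicRegistry>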

You need to add write access for the administrator-role user.  You’ll need to specify a relative path (use the Liberty Directory Locations) or use the absolute path (it’s a bit more complicated).

 <remoteFileAccess>
   <writeDir>${server.output.dir}/dropins</writeDir>
 </remoteFileAccess>

You can log in and see the various APIs [ https://localhost:9443/IBMJMXConnectorREST/api ] or check the documentation [API1][API2][MBean]. You'll want to check any upload with https://localhost:9443/IBMJMXConnectorREST/file/status
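
For example, a quick status check with curl (the admin credentials and --insecure flag match the examples below; --insecure is only there for the default self-signed certificate):

curl -u admin:admin --insecure https://localhost:9443/IBMJMXConnectorREST/file/status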

I used it with the maven-antrun-plugin to execute curl against the REST API.

<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>upload-pre-integration</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target name="post-war-to-server" unless="maven.test.skip">
          <!-- Remove War File from the Server -->
          <exec executable="curl">
            <arg
              line="-u 'admin:admin' -H 'Content-Type: application/war' -X DELETE 'https://localhost:9443/IBMJMXConnectorREST/file/%24%7Bserver.output.dir%7D%2Fdropins%2Ftransformation.war?recursiveDelete=true' --insecure -v" />
          </exec>
          <!-- Should check that this is a 204 -->

          <!-- Post the War file to the Server -->
          <!-- Expand the War file on the Server -->
          <exec executable="curl">
            <arg
              line="-u 'admin:admin' -H 'Content-Type: application/war' -X POST 'https://localhost:9443/IBMJMXConnectorREST/file/%24%7Bserver.output.dir%7D%2Fdropins%2Ftransformation.war?expandOnCompletion=true' --data-binary '@target/example.war' --insecure -v" />
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>

Execute Maven with mvn clean integration-test package in order to trigger the deployment during the integration-test phase.

Hamcrest Matchers Issue

I’ve been using REST Assured as a simple integration test tool for the last few months, and suddenly I started seeing this issue:

java.lang.SecurityException: class "org.hamcrest.Matchers"'s signer information does not match signer information of other classes in the same package
at java.lang.ClassLoader.checkCerts(ClassLoader.java:895)
at java.lang.ClassLoader.preDefineClass(ClassLoader.java:665)
at java.lang.ClassLoader.defineClass(ClassLoader.java:758)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at com.test.restassured.TestCode.testaCreateTenant(TestCode.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)

I wiped the Hamcrest (org.hamcrest) jar files from my .m2 repository, and rebuilt the Maven repository (mvn clean package install -DskipTests). The problem remained.
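
For reference, wiping those jars from the local repository amounts to something like:

rm -rf ~/.m2/repository/org/hamcrest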

I ran across a simple fix for my code (on the rest-assured site, thanks to johanhaleby).

I changed the import to:

import static org.hamcrest.CoreMatchers.equalTo;

Now the code passes the tests without a problem.