Logging with PQS versions lower than 4.6

To log PQS traffic, you can use NGINX as a proxy that records the HTTP request body and the request timing.

On your CentOS/RHEL machine, switch to root privileges
sudo -s

Add nginx.repo
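If you don't already have a repo file, one common option is the official nginx.org repository. A sketch of /etc/yum.repos.d/nginx.repo (adjust the baseurl for your OS per nginx's own docs):

```ini
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
```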

Install the nginx package so you can set up a proxy
yum install nginx.x86_64 -y

Create the nginx configuration with a log_format that outputs the request body ($request_body) and request time ($request_time)
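A minimal sketch of such a log_format, placed in the http block of /etc/nginx/nginx.conf (the name bodylog is my own choice). Note that nginx only populates $request_body when the body is actually read, which the proxy_pass in the next step takes care of:

```nginx
# Log the client, request line, status, timing, and the raw request body
log_format bodylog '$remote_addr - [$time_local] "$request" '
                   '$status $request_time "$request_body"';
```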

Create the default server configuration, and set proxy_pass to point at your Phoenix Query Server (PQS)
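A sketch of the server block (e.g. in /etc/nginx/conf.d/default.conf), assuming PQS runs on the same host on its default port 8765, and using the log_format defined above (I'm calling it bodylog here):

```nginx
server {
    listen 80;
    server_name _;

    # Use the log_format that includes $request_body and $request_time
    access_log /var/log/nginx/request-body.log bodylog;

    location / {
        # Assumes PQS is local on its default port (8765)
        proxy_pass http://127.0.0.1:8765;
    }
}
```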

Reload nginx
nginx -s reload

Tail the log file to see the recorded requests.
[root@data-com-4 pbastide]# tail -f /var/log/nginx/request-body.log


Logging with PQS 4.7 and Higher

1 – Drop the log4j.properties file into /tmp and chmod 755 the file
In practice, I’d probably put it in /opt/phoenix/config or /etc/hbase/phoenix
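As a sketch (the appender name, file size, and layout pattern are my assumptions), a log4j.properties along these lines writes to /tmp/phoenix-query.log and enables TRACE on the Avatica protobuf translator that produces the output shown below:

```properties
log4j.rootLogger=INFO, QUERYLOG

log4j.appender.QUERYLOG=org.apache.log4j.RollingFileAppender
log4j.appender.QUERYLOG.File=/tmp/phoenix-query.log
log4j.appender.QUERYLOG.MaxFileSize=100MB
log4j.appender.QUERYLOG.MaxBackupIndex=5
log4j.appender.QUERYLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.QUERYLOG.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# TRACE here logs every serialized request/response, including the SQL
log4j.logger.org.apache.calcite.avatica.remote.ProtobufTranslationImpl=TRACE
```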

2 – Edit the queryserver.py (/opt/phoenix/bin/queryserver.py )
Around line 128 (hopefully not changed much from 4.7 to 4.8), add this line:
" -Dlog4j.configuration=file:///tmp/log4j.properties" + \

3 – Restart the Phoenix Query Server
/opt/phoenix/bin/queryserver.py stop
/opt/phoenix/bin/queryserver.py start

4 – Scan the file
grep -i XYZ.dim /tmp/phoenix-query.log

2017-01-05 13:40:39,227 [qtp-1461017990-33 - /] TRACE org.apache.calcite.avatica.remote.ProtobufTranslationImpl - Serializing response 'results { connection_id: "0deb2c47-53e5-4846-b22b-ba3faa0bc37a" statement_id: 1 own_statement: true signature { columns { searchable: true display_size: 32 label: "ID" column_name: "ID" precision: 32 table_name: "XYZ.DIM" read_only: true column_class_name: "java.lang.String" type { id: 12 name: "VARCHAR" rep: STRING } } sql: "select ID from XYZ.DIM WHERE VLD_TO_TS IS NULL LIMIT 1" cursor_factory { style: LIST } } first_frame { done: true rows { value { scalar_value { type: STRING string_value: "00025a56f1084f0584a50f7cf9dc4bfc" } } } } update_count: 18446744073709551615 metadata { server_address: "demo.net:80" } } metadata { server_address: "demo.net:80" }'

You can then correlate on connection_id and analyze the log file for the data you need.
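A quick sketch of pulling the connection_id out of a TRACE line so entries from one client connection can be grouped together (the sample line is abbreviated from the output above):

```shell
# Abbreviated sample TRACE payload (stand-in for a real log line)
line='results { connection_id: "0deb2c47-53e5-4846-b22b-ba3faa0bc37a" statement_id: 1 }'

# Pull out the quoted connection_id value
id=$(printf '%s\n' "$line" | grep -o 'connection_id: "[^"]*"' | cut -d'"' -f2)
echo "$id"
```

Against the real file, something like `grep -o 'connection_id: "[^"]*"' /tmp/phoenix-query.log | sort | uniq -c` gives a quick count of log entries per connection.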