
Followed the steps to compile and install IndexR; after inserting data via Hive, a Drill scan returns no data, and the log shows no IndexR-related errors (10 comments, CLOSED)

shunfei avatar shunfei commented on August 12, 2024
Hi, I followed your steps to compile and install IndexR. After that I inserted data via Hive and then used Drill to scan it, but no data was returned. Do you know why? The log shows no errors.

from indexr.

Comments (10)

flowbehappy avatar flowbehappy commented on August 12, 2024

@zhang110912

  • Can you query your data from Hive? Any errors?
  • Remember to run the segments notify script after updating your segments via Hive or manually; otherwise IndexR won't be able to see the update. e.g. indexr-tool/bin/tools.sh -cmd notifysu -t tableName
  • Are there any other errors in drillbit.log while running queries from the Drill console? The message subScan fragment 0 have not record reader to assign only warns that there are no more rows to assign to one of the readers.
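The notify step above can be run after each Hive insert, for example like this (a sketch; the table name test and the working directory are assumptions):

```shell
# Run from the indexr-tool installation directory after each Hive insert,
# so IndexR picks up the newly written segments.
# "test" is a placeholder table name; substitute your own.
bin/tools.sh -cmd notifysu -t test
```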


aerobe avatar aerobe commented on August 12, 2024

@flowbehappy The account zhang110912 belongs to someone else; I was just using it temporarily.
I can query the data from Hive with no errors.
Yes, after inserting data via Hive I always run tools.sh to sync the data.
There are no errors in drillbit.log, and nothing is printed in the Drill console.

Extra info: the Drill version is 1.9.0,
the Hive version is 1.2.1,
the JDK version is 1.8+.


flowbehappy avatar flowbehappy commented on August 12, 2024

@aerobe

  • Have you specified the correct LOCATION in the Hive table? e.g. LOCATION '/indexr/segment/test'
  • Check your indexr.config.properties, especially the indexr.fs.connection option. For example, if your HDFS filesystem in core-site.xml looks like
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

then you should specify the properties in indexr.config.properties like:

indexr.fs.connection=hdfs://mycluster
indexr.fs.data.root=/indexr

And make sure the configuration is synchronized across all drillbit nodes and indexr-tool.
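The core-site.xml / indexr.config.properties mapping above can be sanity-checked with a short script. This is a minimal sketch: the embedded sample strings stand in for your real core-site.xml and indexr.config.properties, and the trailing-slash normalization is an assumption about how the URIs should be compared.

```python
import xml.etree.ElementTree as ET

# Sample core-site.xml content; in practice, read your cluster's real file.
CORE_SITE = """
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
"""

# Sample indexr.config.properties content.
INDEXR_PROPS = """
indexr.fs.connection=hdfs://mycluster
indexr.fs.data.root=/indexr
"""

def default_fs(core_site_xml: str) -> str:
    """Extract fs.defaultFS from core-site.xml, dropping any trailing slash."""
    root = ET.fromstring(core_site_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value").strip().rstrip("/")
    raise KeyError("fs.defaultFS not found in core-site.xml")

def indexr_connection(props: str) -> str:
    """Extract indexr.fs.connection from the properties text."""
    for line in props.splitlines():
        line = line.strip()
        if line.startswith("indexr.fs.connection"):
            return line.split("=", 1)[1].strip().rstrip("/")
    raise KeyError("indexr.fs.connection not found")

if default_fs(CORE_SITE) == indexr_connection(INDEXR_PROPS):
    print("filesystem URIs match")
else:
    print("MISMATCH: IndexR may be looking at the wrong filesystem")
```

Running the same comparison against your real files catches the easy-to-miss cases, such as a trailing slash or an HA nameservice URI on one side only.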


aerobe avatar aerobe commented on August 12, 2024

@flowbehappy Yes, I configured the HDFS location, and I made sure the configuration is the same across all drillbit nodes, as well as on indexr-tool.


flowbehappy avatar flowbehappy commented on August 12, 2024

@aerobe

Can you post your Hive create-table schema, the IndexR table schema, and your indexr.config.properties here?

Also, list all files under the table location path, e.g. hdfs dfs -ls -R /indexr/segment/tableName/.
Then set the io.indexr log level to debug and restart drillbit. Check whether any segments in your table are being loaded.
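Drill uses logback, so the debug level for io.indexr can be set with a logger entry like the following (a sketch; the appender name FILE assumes Drill's stock conf/logback.xml layout -- check your own file):

```xml
<!-- In conf/logback.xml on each drillbit node; restart drillbit afterwards. -->
<logger name="io.indexr" additivity="false">
  <level value="debug"/>
  <appender-ref ref="FILE"/>
</logger>
```

With this in place, segment-loading activity for your table should appear in drillbit.log at startup and at query time.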


aerobe avatar aerobe commented on August 12, 2024

@flowbehappy

the hive create table schema:

[screenshot: Hive create-table schema]

the indexr.config.properties is:

indexr.zk.addr=host-170-12:2181,host-170-13:2181,host-170-14:2181
indexr.zk.root=/indexr
indexr.control.port=9235
indexr.fs.connection=hdfs://host-170-12:8022/
indexr.fs.data.root=/indexr
indexr.fs.local.data.root=/data/indexr

the schema is:

{
  "schema": {
    "columns": [
      {"name": "date", "dataType": "int"},
      {"name": "d1", "dataType": "string"},
      {"name": "m1", "dataType": "int"},
      {"name": "m2", "dataType": "long"},
      {"name": "m3", "dataType": "float"},
      {"name": "m4", "dataType": "double"}
    ]
  }
}

the detail of the drill query:

[screenshot: Drill query output]

the location of table test:

[screenshot: files under the table location]

the log is:

drillbit.log (last 10,000 lines):
Tue Jan 17 08:54:47 CST 2017 Terminating drillbit pid 24818
Tue Jan 17 08:55:04 CST 2017 Starting drillbit on host-170-12
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 514550
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 102400
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2017-01-17 08:55:05,214 [main] INFO o.a.drill.exec.util.GuavaPatcher - Google's Stopwatch patched for old HBase Guava version.
2017-01-17 08:55:05,230 [main] INFO o.a.drill.exec.util.GuavaPatcher - Google's Closeables patched for old HBase Guava version.
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: host.name=host-170-12
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.version=1.8.0_60
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.vendor=Oracle Corporation
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.home=/usr/java/jdk1.8.0_60/jre
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.class.path=/home/zhou/apache-drill-1.9.0/conf:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-rpc-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/vector-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/tpch-sample-data-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-protocol-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar:/home/zhou/apache-drill-1.9.0/jars/ext/zookeeper-3.4.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-codec-1.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/calcite-linq4j-1.4.0-drill-r19.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/datanucleus-api-jdo-3.2.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-hadoop2-compat-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-hadoop-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jcommander-1.30.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/xz-1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-http-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/kafka_2.10-0.8.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/kryo-2.21.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-mapreduce-
client-shuffle-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/validation-api-1.1.0.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-servlets-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-lang-2.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-xml-9.1.1.v20140108.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/avro-ipc-1.7.7-tests.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-digester-1.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/derby-10.10.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-jackson-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/kudu-client-0.6.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.sulky.io-0.9.17.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/asm-debug-all-5.0.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-util-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/httpclient-4.2.5.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-jvm-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-pool2-2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/snappy-java-1.1.1.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/curator-recipes-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/aws-java-sdk-1.7.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parser-core-2.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/interface-annotations-0.6.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-yarn-server-common-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-dbcp-1.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-security-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/httpdlog-parser-2.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.data.converter-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-core-asl-1.9.13.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/apacheds-i18n-2.0.0-M15.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/univocity-parse
rs-1.3.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/config-1.0.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/avro-mapred-1.7.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/serializer-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-column-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/velocity-1.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-json-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/msgpack-0.6.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-server-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/slf4j-api-1.7.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/freemarker-2.3.21.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-compiler-jdk-2.7.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/libfb303-0.9.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-httpclient-3.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/esri-geometry-api-1.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-yarn-client-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/stringtemplate-3.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jta-1.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-configuration-1.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/javax.inject-1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/joda-time-2.9.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/xalan-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-annotations-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/gson-2.2.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-io-2.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-jaxrs-1.9.13.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.data.logging-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/mockito-core-1.9.5.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/stax-api-1.0-2.jar:/home/zh
ou/apache-drill-1.9.0/jars/3rdparty/hbase-common-1.1.3-tests.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jcodings-1.0.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hppc-0.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-common-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-common-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-databind-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-handler-4.0.27.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-codec-4.0.27.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-auth-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-protocol-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-mapreduce-client-core-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/disruptor-3.3.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jcl-over-slf4j-1.7.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/janino-2.7.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jsp-api-2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/xmlenc-0.52.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-procedure-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-mapreduce-client-app-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/sqlline-1.1.9-drill-r7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-prefix-tree-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-lang3-3.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.data.eventsource-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-compiler-2.7.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/antlr-2.7.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/paranamer-2.5.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hive-hbase-handler-1.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-transport-4.0.27.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-hdfs-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-cor
e-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-jaxrs-base-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-pool-1.5.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/spark-unsafe_2.10-1.6.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-module-jaxb-annotations-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-mapper-asl-1.9.11.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hive-metastore-1.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/indexr-query-opt-0.1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-yarn-common-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/log4j-over-slf4j-1.7.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jsch-0.1.42.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.data.logging.protobuf-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-generator-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/protostuff-api-1.0.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/calcite-avatica-1.4.0-drill-r19.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/calcite-core-1.4.0-drill-r19.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jline-2.10.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-server-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-format-2.3.0-incubating.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-buffer-4.0.27.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/javassist-3.16.1-GA.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-cli-1.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jdk.tools-1.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/findbugs-annotations-1.3.9-1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/joni-2.1.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-io-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/parquet-encoding-1.8.1-drill-r0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/scala-library-2.10.4.jar:/home/zhou/apache-drill-1.9.
0/jars/3rdparty/antlr-runtime-3.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/leveldbjni-all-1.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/api-util-1.0.0-M20.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.logback.appender.multiplex-core-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/json-20090211.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-compress-1.4.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.sulky.formatting-0.9.17.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/indexr-segment-0.1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.sulky.codec-0.9.17.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-common-4.0.27.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-mapreduce-client-jobclient-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jdo-api-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/guava-18.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/indexr-common-0.1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-collections-3.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-hadoop-compat-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hive-contrib-1.2.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-net-3.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/xml-apis-1.4.01.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jdiff-1.0.9.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/protobuf-java-2.5.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/libthrift-0.9.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/eigenbase-properties-1.1.5.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/javassist-3.12.1.GA.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/protostuff-json-1.0.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/bcpkix-jdk15on-1.52.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/lz4-1.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/c
urator-client-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hamcrest-core-1.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.logback.classic-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jpam-1.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-webapp-9.1.1.v20140108.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/avro-ipc-1.7.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-client-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-client-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-core-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-beanutils-1.7.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jsr305-3.0.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jul-to-slf4j-1.7.6.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hbase-annotations-1.1.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-continuation-9.1.1.v20140108.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/netty-3.7.0.Final.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-jaxrs-json-provider-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-beanutils-core-1.8.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-common-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-module-afterburner-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/dom4j-1.6.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-math-2.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.sender-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-annotations-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/avro-1.7.7.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.logback.appender.multiplex-classic-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/foodmart-data-json-0.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/datanucleus-core-3.2.10.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/curator-x-discovery-2.7.1.jar:/home/zhou/apache-drill-1.9.0/j
ars/3rdparty/htrace-core-3.1.0-incubating.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/async-1.4.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/zkclient-0.3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/bcprov-jdk15on-1.52.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/apacheds-kerberos-codec-2.0.0-M15.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/curator-framework-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/kafka-clients-0.8.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/httpcore-4.2.4.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/snappy-java-1.0.5-M3.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/commons-math3-3.1.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/mongo-java-driver-3.0.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jackson-core-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/bonecp-0.8.0.RELEASE.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/indexr-server-0.1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/protostuff-core-1.0.8.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/de.huxhorn.lilith.logback.converter-classic-0.9.44.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-servlets-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/datanucleus-rdbms-3.2.9.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/objenesis-1.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jetty-servlet-9.1.5.v20140505.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-mapreduce-client-common-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/metrics-healthchecks-3.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/api-asn1-api-1.0.0-M20.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/jopt-simple-3.2.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/json-simple-1.1.1.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/xercesImpl-2.11.0.jar:/home/zhou/apache-drill-1.9.0/jars/3rdparty/hadoop-aws-2.7.1.jar:/home/zhou/apache-drill-1.9.0/jars/classb/javax.ws.rs-api-2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/javax
.annotation-api-1.2.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-container-jetty-servlet-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-common-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/activation-1.1.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-container-jetty-http-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/hk2-utils-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/javax.inject-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/reflections-0.9.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-server-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/osgi-resource-locator-1.0.1.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-client-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/logback-classic-1.0.13.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-mvc-freemarker-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/hk2-api-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jetty-6.1.26.jar:/home/zhou/apache-drill-1.9.0/jars/classb/logback-core-1.0.13.jar:/home/zhou/apache-drill-1.9.0/jars/classb/mimepull-1.9.3.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jetty-util-6.1.26.jar:/home/zhou/apache-drill-1.9.0/jars/classb/hk2-locator-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-guava-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-container-servlet-core-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/codemodel-2.6.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-container-servlet-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-mvc-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/aopalliance-repackaged-2.2.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jersey-media-multipart-2.8.jar:/home/zhou/apache-drill-1.9.0/jars/classb/javax.servlet-api-3.1.0.jar:/home/zhou/apache-drill-1.9.0/jars/classb/jaxb-api-2.2.2.jar
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.io.tmpdir=/tmp
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: java.compiler=
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: os.name=Linux
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: os.arch=amd64
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: os.version=2.6.32-573.el6.x86_64
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: user.name=root
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: user.home=/root
2017-01-17 08:55:05,232 [main] INFO o.apache.drill.exec.server.Drillbit - Drillbit environment: user.dir=/home/zhou/apache-drill-1.9.0
2017-01-17 08:55:05,233 [main] DEBUG o.a.drill.exec.server.StartupOptions - Parsing arguments.
2017-01-17 08:55:05,337 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - Scanning classpath for resources with pathname "drill-module.conf".
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,341 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar!/drill-module.conf.
2017-01-17 08:55:05,363 [main] INFO o.a.drill.common.config.DrillConfig - Configuration and plugin file(s) identified in 102ms.
Base Configuration:
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/drill-default.conf

Intermediate Configuration and Plugin files, in order of precedence:
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar!/drill-module.conf
- jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar!/drill-module.conf

Override File: file:/home/zhou/apache-drill-1.9.0/conf/drill-override.conf

2017-01-17 08:55:05,367 [main] DEBUG o.a.drill.common.config.DrillConfig - Setting up DrillConfig object.
2017-01-17 08:55:05,381 [main] DEBUG o.a.drill.common.config.DrillConfig - DrillConfig object initialized.
2017-01-17 08:55:05,381 [main] DEBUG o.apache.drill.exec.server.Drillbit - Starting new Drillbit.
2017-01-17 08:55:05,613 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - Scanning classpath for resources with pathname "META-INF/drill-module-scan/registry.json".
2017-01-17 08:55:05,613 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/META-INF/drill-module-scan/registry.json.
2017-01-17 08:55:05,613 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/META-INF/drill-module-scan/registry.json.
2017-01-17 08:55:05,613 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/META-INF/drill-module-scan/registry.json.
2017-01-17 08:55:05,740 [main] INFO o.a.d.common.scanner.BuildTimeScan - Loaded prescanned packages [org.apache.drill.storage, org.apache.drill.exec.expr, org.apache.drill.exec.physical, org.apache.drill.exec.store, org.apache.drill.exec.rpc.user.security, org.apache.drill.exec.store.mock, org.apache.drill.common.logical, org.apache.drill.exec.store.mock, org.apache.drill.common.logical, org.apache.drill.storage, org.apache.drill.exec.store.mock, org.apache.drill.common.logical] from locations [jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/META-INF/drill-module-scan/registry.json, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/META-INF/drill-module-scan/registry.json, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/META-INF/drill-module-scan/registry.json]
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - Scanning classpath for resources with pathname "drill-module.conf".
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/.
2017-01-17 08:55:05,741 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - Scanning classpath for resources with pathname "META-INF/drill-module-scan/registry.json".
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-java-exec-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-common-1.9.0.jar!/.
2017-01-17 08:55:05,742 [main] DEBUG o.a.d.c.scanner.ClassPathScanner - - collected resource's classpath root URL jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-logical-1.9.0.jar!/.
2017-01-17 08:55:06,544 [main] INFO o.a.d.c.scanner.ClassPathScanner - Scanning packages [org.apache.drill.exec.store.mongo, org.apache.hadoop.hive, org.apache.drill.exec.store.kudu, org.apache.drill.exec.store.mock, org.apache.drill.common.logical, org.apache.drill.exec.store.jdbc, org.apache.drill.exec.expr, org.apache.drill.exec.physical, org.apache.drill.exec.store, org.apache.drill.exec.rpc.user.security, org.apache.drill.exec.store.hbase, org.apache.drill.exec.expr.fn.impl.conv, org.apache.drill.exec.store.indexr, org.apache.hadoop.hive, org.apache.drill.exec.fn.hive] in locations [jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar!/] took 799ms
2017-01-17 08:55:06,547 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2017-01-17 08:55:06,834 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect host-170-12:2181,host-170-13:2181,host-170-14:2181, zkRoot drill, clusterId: drillbits_zhou
2017-01-17 08:55:06,905 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2017-01-17 08:55:07,310 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (762 ms).
2017-01-17 08:55:07,310 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2017-01-17 08:55:07,310 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2017-01-17 08:55:07,333 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 0 total bits. New active drillbits:

2017-01-17 08:55:07,367 [main] DEBUG o.a.drill.exec.rpc.user.UserServer - Server of type UserServer started on port 31010.
2017-01-17 08:55:07,381 [main] DEBUG o.a.d.exec.rpc.control.ControlServer - Server of type ControlServer started on port 31011.
2017-01-17 08:55:07,387 [main] DEBUG o.a.drill.exec.rpc.data.DataServer - Server of type DataServer started on port 31012.
2017-01-17 08:55:07,401 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 16 classes for org.apache.drill.common.logical.data.LogicalOperator took 5ms
2017-01-17 08:55:07,401 [main] DEBUG o.a.d.c.l.data.LogicalOperatorBase - Found 16 logical operator classes: [class org.apache.drill.common.logical.data.Project, class org.apache.drill.common.logical.data.Writer, class org.apache.drill.common.logical.data.Sequence, class org.apache.drill.common.logical.data.Transform, class org.apache.drill.common.logical.data.Order, class org.apache.drill.common.logical.data.Values, class org.apache.drill.common.logical.data.Filter, class org.apache.drill.common.logical.data.RunningAggregate, class org.apache.drill.common.logical.data.Limit, class org.apache.drill.common.logical.data.Join, class org.apache.drill.common.logical.data.Store, class org.apache.drill.common.logical.data.GroupingAggregate, class org.apache.drill.common.logical.data.Window, class org.apache.drill.common.logical.data.Flatten, class org.apache.drill.common.logical.data.Scan, class org.apache.drill.common.logical.data.Union].
2017-01-17 08:55:07,406 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 11 classes for org.apache.drill.common.logical.StoragePluginConfig took 3ms
2017-01-17 08:55:07,406 [main] DEBUG o.a.d.c.l.StoragePluginConfigBase - Found 11 logical operator classes: [class org.apache.drill.exec.store.dfs.FileSystemConfig, class org.apache.drill.exec.store.sys.SystemTablePluginConfig, class org.apache.drill.exec.store.NamedStoragePluginConfig, class org.apache.drill.exec.store.hive.HiveStoragePluginConfig, class org.apache.drill.exec.store.kudu.KuduStoragePluginConfig, class org.apache.drill.exec.store.indexr.IndexRStoragePluginConfig, class org.apache.drill.exec.store.mock.MockStorageEngineConfig, class org.apache.drill.exec.store.hbase.HBaseStoragePluginConfig, class org.apache.drill.exec.store.jdbc.JdbcStorageConfig, class org.apache.drill.exec.store.ischema.InfoSchemaConfig, class org.apache.drill.exec.store.mongo.MongoStoragePluginConfig].
2017-01-17 08:55:07,408 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 1ms
2017-01-17 08:55:07,409 [main] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:07,482 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 65 classes for org.apache.drill.exec.physical.base.PhysicalOperator took 39ms
2017-01-17 08:55:07,482 [main] DEBUG o.a.d.e.p.base.PhysicalOperatorUtil - Found 65 physical operator classes: [class org.apache.drill.exec.physical.config.UnorderedMuxExchange, class org.apache.drill.exec.physical.config.UnionExchange, class org.apache.drill.exec.store.jdbc.JdbcSubScan, class org.apache.drill.exec.physical.config.FlattenPOP, class org.apache.drill.exec.store.hive.HiveSubScan, class org.apache.drill.exec.store.ischema.InfoSchemaSubScan, class org.apache.drill.exec.physical.config.HashToRandomExchange, class org.apache.drill.exec.physical.config.HashAggregate, class org.apache.drill.exec.physical.config.SelectionVectorRemover, class org.apache.drill.exec.store.hive.HiveScan, class org.apache.drill.exec.store.hbase.HBaseSubScan, class org.apache.drill.exec.physical.config.Project, class org.apache.drill.exec.store.kudu.KuduSubScan, class org.apache.drill.exec.store.mock.MockGroupScanPOP, class org.apache.drill.exec.physical.config.MergeJoinPOP, class org.apache.drill.exec.physical.config.WindowPOP, class org.apache.drill.exec.physical.config.BroadcastExchange, class org.apache.drill.exec.physical.config.SingleMergeExchange, class org.apache.drill.exec.physical.config.Limit, class org.apache.drill.exec.physical.config.UnorderedDeMuxExchange, class org.apache.drill.exec.physical.config.StreamingAggregate, class org.apache.drill.exec.physical.config.HashToMergeExchange, class org.apache.drill.exec.physical.config.UnionAll, class org.apache.drill.exec.store.hbase.HBaseGroupScan, class org.apache.drill.exec.physical.config.BroadcastSender, class org.apache.drill.exec.store.mongo.MongoGroupScan, class org.apache.drill.exec.store.kudu.KuduGroupScan, class org.apache.drill.exec.store.parquet.ParquetGroupScan, class org.apache.drill.exec.physical.config.NestedLoopJoinPOP, class org.apache.drill.exec.physical.config.UnorderedReceiver, class org.apache.drill.exec.store.dfs.easy.EasyWriter, class org.apache.drill.exec.physical.config.HashPartitionSender, class org.apache.drill.exec.store.ischema.InfoSchemaGroupScan, class org.apache.drill.exec.physical.config.SingleSender, class org.apache.drill.exec.store.hive.HiveDrillNativeParquetScan, class org.apache.drill.exec.physical.config.MergingReceiverPOP, class org.apache.drill.exec.store.mock.MockStorePOP, class org.apache.drill.exec.physical.config.HashJoinPOP, class org.apache.drill.exec.store.direct.DirectSubScan, class org.apache.drill.exec.physical.config.OrderedPartitionExchange, class org.apache.drill.exec.physical.config.IteratorValidator, class org.apache.drill.exec.store.hive.HiveDrillNativeParquetSubScan, class org.apache.drill.exec.physical.config.Sort, class org.apache.drill.exec.store.mongo.MongoSubScan, class org.apache.drill.exec.physical.config.Trace, class org.apache.drill.exec.physical.config.RangeSender, class org.apache.drill.exec.store.indexr.IndexRGroupScan, class org.apache.drill.exec.store.parquet.ParquetWriter, class org.apache.drill.exec.store.dfs.easy.EasySubScan, class org.apache.drill.exec.physical.config.Screen, class org.apache.drill.exec.physical.config.OrderedPartitionSender, class org.apache.drill.exec.store.mock.MockSubScanPOP, class org.apache.drill.exec.physical.config.Values, class org.apache.drill.exec.store.sys.SystemTableScan, class org.apache.drill.exec.physical.config.ComplexToJson, class org.apache.drill.exec.physical.config.ProducerConsumer, class org.apache.drill.exec.physical.config.Filter, class org.apache.drill.exec.physical.config.TopN, class org.apache.drill.exec.store.kudu.KuduWriter, class org.apache.drill.exec.store.jdbc.JdbcGroupScan, class org.apache.drill.exec.store.indexr.IndexRSubScan, class org.apache.drill.exec.physical.config.ExternalSort, class org.apache.drill.exec.store.dfs.easy.EasyGroupScan, class org.apache.drill.exec.store.direct.DirectGroupScan, class org.apache.drill.exec.store.parquet.ParquetRowGroupScan].
2017-01-17 08:55:07,508 [main] DEBUG org.apache.drill.common.JSONOptions - Creating Deserializer.
2017-01-17 08:55:07,540 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 38 classes for org.apache.drill.exec.physical.impl.BatchCreator took 14ms
2017-01-17 08:55:07,543 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 5 classes for org.apache.drill.exec.physical.impl.RootCreator took 2ms
2017-01-17 08:55:07,544 [main] DEBUG o.a.d.e.p.i.OperatorCreatorRegistry - Adding Operator Creator map: {class org.apache.drill.exec.physical.config.Sort=public org.apache.drill.exec.physical.impl.sort.SortBatchCreator(), class org.apache.drill.exec.store.jdbc.JdbcSubScan=public org.apache.drill.exec.store.jdbc.JdbcBatchCreator(), class org.apache.drill.exec.store.mongo.MongoSubScan=public org.apache.drill.exec.store.mongo.MongoScanBatchCreator(), class org.apache.drill.exec.physical.config.FlattenPOP=public org.apache.drill.exec.physical.impl.flatten.FlattenBatchCreator(), class org.apache.drill.exec.physical.config.Trace=public org.apache.drill.exec.physical.impl.trace.TraceBatchCreator(), class org.apache.drill.exec.store.ischema.InfoSchemaSubScan=public org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator(), class org.apache.drill.exec.store.hive.HiveSubScan=public org.apache.drill.exec.store.hive.HiveScanBatchCreator(), class org.apache.drill.exec.physical.config.HashAggregate=public org.apache.drill.exec.physical.impl.aggregate.HashAggBatchCreator(), class org.apache.drill.exec.physical.config.SelectionVectorRemover=public org.apache.drill.exec.physical.impl.svremover.SVRemoverCreator(), class org.apache.drill.exec.store.parquet.ParquetWriter=public org.apache.drill.exec.store.parquet.ParquetWriterBatchCreator(), class org.apache.drill.exec.store.hbase.HBaseSubScan=public org.apache.drill.exec.store.hbase.HBaseScanBatchCreator(), class org.apache.drill.exec.store.dfs.easy.EasySubScan=public org.apache.drill.exec.store.dfs.easy.EasyReaderBatchCreator(), class org.apache.drill.exec.physical.config.Screen=public org.apache.drill.exec.physical.impl.ScreenCreator(), class org.apache.drill.exec.physical.config.Project=public org.apache.drill.exec.physical.impl.project.ProjectBatchCreator(), class org.apache.drill.exec.store.kudu.KuduSubScan=public org.apache.drill.exec.store.kudu.KuduScanBatchCreator(), class org.apache.drill.exec.physical.config.OrderedPartitionSender=public org.apache.drill.exec.physical.impl.orderedpartitioner.OrderedPartitionSenderCreator(), class org.apache.drill.exec.physical.config.MergeJoinPOP=public org.apache.drill.exec.physical.impl.join.MergeJoinCreator(), class org.apache.drill.exec.physical.config.WindowPOP=public org.apache.drill.exec.physical.impl.window.WindowFrameBatchCreator(), class org.apache.drill.exec.store.mock.MockSubScanPOP=public org.apache.drill.exec.store.mock.MockScanBatchCreator(), class org.apache.drill.exec.physical.config.Limit=public org.apache.drill.exec.physical.impl.limit.LimitBatchCreator(), class org.apache.drill.exec.physical.config.Values=public org.apache.drill.exec.physical.impl.values.ValuesBatchCreator(), class org.apache.drill.exec.physical.config.StreamingAggregate=public org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatchCreator(), class org.apache.drill.exec.store.sys.SystemTableScan=public org.apache.drill.exec.store.sys.SystemTableBatchCreator(), class org.apache.drill.exec.physical.config.ComplexToJson=public org.apache.drill.exec.physical.impl.project.ComplexToJsonBatchCreator(), class org.apache.drill.exec.physical.config.ProducerConsumer=public org.apache.drill.exec.physical.impl.producer.ProducerConsumerBatchCreator(), class org.apache.drill.exec.physical.config.UnionAll=public org.apache.drill.exec.physical.impl.union.UnionAllBatchCreator(), class org.apache.drill.exec.physical.config.BroadcastSender=public org.apache.drill.exec.physical.impl.broadcastsender.BroadcastSenderCreator(), class org.apache.drill.exec.physical.config.Filter=public org.apache.drill.exec.physical.impl.filter.FilterBatchCreator(), class org.apache.drill.exec.physical.config.NestedLoopJoinPOP=public org.apache.drill.exec.physical.impl.join.NestedLoopJoinBatchCreator(), class org.apache.drill.exec.physical.config.UnorderedReceiver=public org.apache.drill.exec.physical.impl.unorderedreceiver.UnorderedReceiverCreator(), class org.apache.drill.exec.physical.config.TopN=public org.apache.drill.exec.physical.impl.TopN.TopNSortBatchCreator(), class org.apache.drill.exec.store.dfs.easy.EasyWriter=public org.apache.drill.exec.store.dfs.easy.EasyWriterBatchCreator(), class org.apache.drill.exec.physical.config.HashPartitionSender=public org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderCreator(), class org.apache.drill.exec.store.kudu.KuduWriter=public org.apache.drill.exec.store.kudu.KuduWriterBatchCreator(), class org.apache.drill.exec.physical.config.SingleSender=public org.apache.drill.exec.physical.impl.SingleSenderCreator(), class org.apache.drill.exec.physical.config.MergingReceiverPOP=public org.apache.drill.exec.physical.impl.MergingReceiverCreator(), class org.apache.drill.exec.physical.config.HashJoinPOP=public org.apache.drill.exec.physical.impl.join.HashJoinBatchCreator(), class org.apache.drill.exec.store.indexr.IndexRSubScan=public org.apache.drill.exec.store.indexr.IndexRScanBatchCreator(), class org.apache.drill.exec.physical.config.ExternalSort=public org.apache.drill.exec.physical.impl.xsort.ExternalSortBatchCreator(), class org.apache.drill.exec.store.direct.DirectSubScan=public org.apache.drill.exec.store.direct.DirectBatchCreator(), class org.apache.drill.exec.physical.config.IteratorValidator=public org.apache.drill.exec.physical.impl.validate.IteratorValidatorCreator(), class org.apache.drill.exec.store.parquet.ParquetRowGroupScan=public org.apache.drill.exec.store.parquet.ParquetScanBatchCreator(), class org.apache.drill.exec.store.hive.HiveDrillNativeParquetSubScan=public org.apache.drill.exec.store.hive.HiveDrillNativeScanBatchCreator()}
2017-01-17 08:55:07,567 [main] DEBUG o.a.d.e.e.f.FunctionImplementationRegistry - Generating function registry.
2017-01-17 08:55:07,903 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 1 classes for org.apache.drill.exec.expr.fn.PluggableFunctionRegistry took 3ms
2017-01-17 08:55:08,482 [main] INFO o.a.d.c.scanner.ClassPathScanner - Scanning packages [org.apache.drill.exec.store.mongo, org.apache.hadoop.hive, org.apache.drill.exec.store.kudu, org.apache.drill.exec.store.mock, org.apache.drill.common.logical, org.apache.drill.exec.store.jdbc, org.apache.drill.exec.expr, org.apache.drill.exec.physical, org.apache.drill.exec.store, org.apache.drill.exec.rpc.user.security, org.apache.drill.exec.store.hbase, org.apache.drill.exec.expr.fn.impl.conv, org.apache.drill.exec.store.indexr, org.apache.hadoop.hive, org.apache.drill.exec.fn.hive] in locations [jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-jdbc-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-hive-exec-shaded-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-kudu-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hbase-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-indexr-storage-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-storage-hive-core-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-gis-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-memory-base-1.9.0.jar!/, jar:file:/home/zhou/apache-drill-1.9.0/jars/drill-mongo-storage-1.9.0.jar!/] took 577ms
2017-01-17 08:55:08,540 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 114 classes for org.apache.hadoop.hive.ql.udf.generic.GenericUDF took 56ms
2017-01-17 08:55:08,829 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 68 classes for org.apache.hadoop.hive.ql.exec.UDF took 143ms
2017-01-17 08:55:08,846 [main] INFO o.a.d.e.e.f.FunctionImplementationRegistry - Function registry loaded. 429 functions loaded in 1279 ms.
2017-01-17 08:55:08,869 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 10 classes for org.apache.drill.exec.store.StoragePlugin took 13ms
2017-01-17 08:55:08,869 [main] DEBUG o.a.d.e.s.StoragePluginRegistryImpl - Found 10 storage plugin configuration classes:
- class org.apache.drill.exec.store.hive.HiveStoragePlugin
- class org.apache.drill.exec.store.mock.MockStorageEngine
- class org.apache.drill.exec.store.jdbc.JdbcStoragePlugin
- class org.apache.drill.exec.store.mongo.MongoStoragePlugin
- class org.apache.drill.exec.store.sys.SystemTablePlugin
- class org.apache.drill.exec.store.dfs.FileSystemPlugin
- class org.apache.drill.exec.store.ischema.InfoSchemaStoragePlugin
- class org.apache.drill.exec.store.hbase.HBaseStoragePlugin
- class org.apache.drill.exec.store.indexr.IndexRStoragePlugin
- class org.apache.drill.exec.store.kudu.KuduStoragePlugin.
2017-01-17 08:55:08,884 [main] DEBUG o.a.d.e.s.h.HBaseStoragePluginConfig - Initializing HBase StoragePlugin configuration with zookeeper quorum 'localhost', port '2181'.
2017-01-17 08:55:09,000 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 6 classes for org.apache.drill.exec.store.dfs.FormatPlugin took 31ms
2017-01-17 08:55:09,024 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:09,025 [main] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:09,047 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 6 classes for org.apache.drill.exec.store.dfs.FormatPlugin took 0ms
2017-01-17 08:55:09,050 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:09,051 [main] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:09,051 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:09,051 [main] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:09,051 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:09,051 [main] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:09,790 [main] INFO o.a.d.e.s.indexr.IndexRStoragePlugin - Plugin started
2017-01-17 08:55:09,852 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,906 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 1 total bits. New active drillbits:
host-170-19:31010:31011:31012

2017-01-17 08:55:09,907 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,911 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 3 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-13:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:09,912 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,919 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 3 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-13:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:09,922 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,925 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 4 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-13:31010:31011:31012
host-170-14:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:09,938 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,942 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 5 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-13:31010:31011:31012
host-170-18:31010:31011:31012
host-170-14:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:09,946 [main] INFO o.a.drill.exec.server.rest.WebServer - Setting up HTTP connector for web server
2017-01-17 08:55:09,946 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:09,950 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 6 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-13:31010:31011:31012
host-170-18:31010:31011:31012
host-170-14:31010:31011:31012
host-170-12:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:10,009 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:10,015 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 7 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-15:31010:31011:31012
host-170-13:31010:31011:31012
host-170-18:31010:31011:31012
host-170-14:31010:31011:31012
host-170-12:31010:31011:31012
host-170-19:31010:31011:31012

2017-01-17 08:55:10,171 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:10,176 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 8 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-15:31010:31011:31012
host-170-13:31010:31011:31012
host-170-18:31010:31011:31012
host-170-14:31010:31011:31012
host-170-12:31010:31011:31012
host-170-19:31010:31011:31012
host-170-17:31010:31011:31012

2017-01-17 08:55:10,204 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Got cache changed --> updating endpoints
2017-01-17 08:55:10,209 [Curator-ServiceCache-0] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Active drillbit set changed. Now includes 9 total bits. New active drillbits:
host-170-20:31010:31011:31012
host-170-15:31010:31011:31012
host-170-13:31010:31011:31012
host-170-18:31010:31011:31012
host-170-14:31010:31011:31012
host-170-16:31010:31011:31012
host-170-12:31010:31011:31012
host-170-19:31010:31011:31012
host-170-17:31010:31011:31012

2017-01-17 08:55:10,731 [main] INFO o.apache.drill.exec.server.Drillbit - Startup completed (3420 ms).
2017-01-17 08:55:10,731 [main] DEBUG o.apache.drill.exec.server.Drillbit - Started new Drillbit.
2017-01-17 08:55:52,593 [qtp1052247420-163] DEBUG o.a.drill.exec.client.DrillClient - Connecting to server host-170-12:31010
2017-01-17 08:55:52,701 [USER-rpc-event-queue] DEBUG o.a.drill.exec.rpc.user.UserServer - Received query to run. Returning query handle.
2017-01-17 08:55:52,916 [USER-rpc-event-queue] DEBUG o.a.drill.exec.rpc.user.UserServer - Sending response with Sender 2129727549
2017-01-17 08:55:52,917 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 27829467-0663-aa9b-bac3-7144f9d76eaf: select * from indexr.test
2017-01-17 08:55:52,918 [USER-rpc-event-queue] DEBUG o.a.d.e.rpc.user.QueryResultHandler - Received QueryId 27829467-0663-aa9b-bac3-7144f9d76eaf successfully. Adding results listener org.apache.drill.exec.server.rest.QueryWrapper$Listener@4c0bccf.
2017-01-17 08:55:52,936 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.s.h.HBaseStoragePluginConfig - Initializing HBase StoragePlugin configuration with zookeeper quorum 'localhost', port '2181'.
2017-01-17 08:55:52,960 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.d.c.s.persistence.ScanResult - loading 6 classes for org.apache.drill.exec.store.dfs.FormatPlugin took 0ms
2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
2017-01-17 08:55:52,962 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.c.l.FormatPluginConfigBase - Found 7format plugin configuration classes:
org.apache.drill.exec.store.easy.sequencefile.SequenceFileFormatConfig
org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig
org.apache.drill.exec.store.avro.AvroFormatConfig
org.apache.drill.exec.store.easy.text.TextFormatPlugin$TextFormatConfig
org.apache.drill.exec.store.dfs.NamedFormatPluginConfig
org.apache.drill.exec.store.parquet.ParquetFormatConfig
org.apache.drill.exec.store.httpd.HttpdLogFormatPlugin$HttpdLogFormatConfig

2017-01-17 08:55:53,076 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.store.SchemaFactory - Took 141 ms to register schemas.
2017-01-17 08:55:53,420 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - INITIAL:
LogicalProject(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]): rowcount = 100.0, cumulative cost = {200.0 rows, 701.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 3
EnumerableTableScan(table=[[indexr, test]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2

2017-01-17 08:55:53,470 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP:Window Function rewrites (20ms):
LogicalProject(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]): rowcount = 100.0, cumulative cost = {200.0 rows, 701.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 5
EnumerableTableScan(table=[[indexr, test]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2

2017-01-17 08:55:53,477 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP_BOTTOM_UP:Directory Prune Planning (5ms):
LogicalProject(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]): rowcount = 100.0, cumulative cost = {200.0 rows, 701.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 9
EnumerableTableScan(table=[[indexr, test]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 2

2017-01-17 08:55:53,530 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.s.indexr.ScanWrokProvider - =============== calStat totalRowCount:1, passRowCount:1, passPackCount:1, statScanRowCount:1, maxPw: 2
2017-01-17 08:55:53,536 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,547 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,548 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,549 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - VOLCANO:Logical Planning (no pruning or join). (71ms):
DrillScanRel(table=[[indexr, test]], groupscan=[IndexRGroupScan@57996b88{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 16

2017-01-17 08:55:53,552 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,552 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,552 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP_BOTTOM_UP:Partition Prune Planning (3ms):
DrillScanRel(table=[[indexr, test]], groupscan=[IndexRGroupScan@57996b88{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 16

2017-01-17 08:55:53,553 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,553 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,553 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP_BOTTOM_UP:LOPT Join Planning (0ms):
DrillScanRel(table=[[indexr, test]], groupscan=[IndexRGroupScan@57996b88{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 16

2017-01-17 08:55:53,554 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,554 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,555 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP_BOTTOM_UP:Convert SUM to $SUM0 (1ms):
DrillScanRel(table=[[indexr, test]], groupscan=[IndexRGroupScan@57996b88{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 16

2017-01-17 08:55:53,597 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getMaxParallelizationWidth 2
2017-01-17 08:55:53,599 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,603 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,604 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - VOLCANO:Physical Planning (47ms):
ScreenPrel: rowcount = 100000.0, cumulative cost = {210000.0 rows, 1410000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 71
UnionExchangePrel: rowcount = 100000.0, cumulative cost = {200000.0 rows, 1400000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 70
ProjectPrel(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 69
ScanPrel(groupscan=[IndexRGroupScan@51174a4e{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 58

2017-01-17 08:55:53,610 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,610 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,611 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - HEP_BOTTOM_UP:Physical Partition Prune Planning (3ms):
ScreenPrel: rowcount = 100000.0, cumulative cost = {210000.0 rows, 1410000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 88
UnionExchangePrel: rowcount = 100000.0, cumulative cost = {200000.0 rows, 1400000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 86
ProjectPrel(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 84
ScanPrel(groupscan=[IndexRGroupScan@2968382e{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 79

2017-01-17 08:55:53,620 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,621 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,621 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getMaxParallelizationWidth 2
2017-01-17 08:55:53,621 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getMinParallelizationWidth 1
2017-01-17 08:55:53,621 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,626 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,626 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,627 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - Drill Physical:
00-00 Screen : rowType = RecordType(INTEGER date, VARCHAR(65535) d1, INTEGER m1, BIGINT m2, FLOAT m3, DOUBLE m4): rowcount = 100000.0, cumulative cost = {210000.0 rows, 1410000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 114
00-01 Project(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]) : rowType = RecordType(INTEGER date, VARCHAR(65535) d1, INTEGER m1, BIGINT m2, FLOAT m3, DOUBLE m4): rowcount = 100000.0, cumulative cost = {200000.0 rows, 1400000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 113
00-02 UnionExchange : rowType = RecordType(INTEGER date, VARCHAR(65535) d1, INTEGER m1, BIGINT m2, FLOAT m3, DOUBLE m4): rowcount = 100000.0, cumulative cost = {200000.0 rows, 1400000.0 cpu, 0.0 io, 2.4576E9 network, 0.0 memory}, id = 112
01-01 Project(date=[$0], d1=[$1], m1=[$2], m2=[$3], m3=[$4], m4=[$5]) : rowType = RecordType(INTEGER date, VARCHAR(65535) d1, INTEGER m1, BIGINT m2, FLOAT m3, DOUBLE m4): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 111
01-02 Scan(groupscan=[IndexRGroupScan@545b6f2a{Spec=IndexRScanSpec@6a7177{table:test, rsFilter:null}, columns=[date, d1, m1, m2, m3, m4]}]) : rowType = RecordType(INTEGER date, VARCHAR(65535) d1, INTEGER m1, BIGINT m2, FLOAT m3, DOUBLE m4): rowcount = 100000.0, cumulative cost = {100000.0 rows, 600000.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 110

2017-01-17 08:55:53,628 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getScanStats, scanRowCount: 1
2017-01-17 08:55:53,694 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.s.h.DefaultSqlHandler - Drill Plan :
{
"head" : {
"version" : 1,
"generator" : {
"type" : "DefaultSqlHandler",
"info" : ""
},
"type" : "APACHE_DRILL_PHYSICAL",
"options" : [ ],
"queue" : 0,
"resultMode" : "EXEC"
},
"graph" : [ {
"pop" : "indexr-scan",
"@id" : 65538,
"userName" : "root",
"indexrScanSpec" : {
"tableName" : "test",
"rsFilter" : null
},
"storage" : {
"type" : "indexr",
"enabled" : true
},
"columns" : [ "date", "d1", "m1", "m2", "m3", "m4" ],
"limitScanRows" : 9223372036854775807,
"scanId" : "ae746a55-90c2-48e7-aaa6-85156af1e33e",
"cost" : 100000.0
}, {
"pop" : "project",
"@id" : 65537,
"exprs" : [ {
"ref" : "date",
"expr" : "date"
}, {
"ref" : "d1",
"expr" : "d1"
}, {
"ref" : "m1",
"expr" : "m1"
}, {
"ref" : "m2",
"expr" : "m2"
}, {
"ref" : "m3",
"expr" : "m3"
}, {
"ref" : "m4",
"expr" : "m4"
} ],
"child" : 65538,
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
}, {
"pop" : "union-exchange",
"@id" : 2,
"child" : 65537,
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
}, {
"pop" : "project",
"@id" : 1,
"exprs" : [ {
"ref" : "date",
"expr" : "date"
}, {
"ref" : "d1",
"expr" : "d1"
}, {
"ref" : "m1",
"expr" : "m1"
}, {
"ref" : "m2",
"expr" : "m2"
}, {
"ref" : "m3",
"expr" : "m3"
}, {
"ref" : "m4",
"expr" : "m4"
} ],
"child" : 2,
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
}, {
"pop" : "screen",
"@id" : 0,
"child" : 1,
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
} ]
}
2017-01-17 08:55:53,700 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getMaxParallelizationWidth 2
2017-01-17 08:55:53,700 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - =============== getMinParallelizationWidth 1
2017-01-17 08:55:53,701 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] INFO o.a.d.e.s.indexr.ScanWrokProvider - =============== calScanWorks limitScanRows: 9223372036854775807, Pass rate: 100.00%, scan row: 1
2017-01-17 08:55:53,714 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.s.indexr.ScanWrokProvider - =============== calScanWorks minPw: 1, maxPw: 2, historyWorks: [ScanCompleteWork{segment=dt=20160701/000000_0, startPackId=0, endPackId=1, totalBytes=128, hosts:=host-170-19,host-170-17,}], realtimeWorks(0): {}, endpointAffinities: [EndpointAffinity [endpoint=address: "host-170-19" user_port: 31010 control_port: 31011 data_port: 31012, affinity=128.0, mandatory=true, maxWidth=2147483647]]
2017-01-17 08:55:53,717 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.s.schedule.AssignmentCreator - Took 1 ms to assign 1 work units to 1 fragments
2017-01-17 08:55:53,718 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - ===================== applyAssignments endpoints:[address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
]
2017-01-17 08:55:53,718 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - ===================== applyAssignments endpointToWorks:{address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
=[ScanCompleteWork{segment=dt=20160701/000000_0, startPackId=0, endPackId=1, totalBytes=128, hosts:=host-170-19,host-170-17,}]}
2017-01-17 08:55:53,718 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.store.indexr.IndexRGroupScan - ===================== applyAssignments assignments:{0=FragmentAssignment{fragmentCount=1, fragmentIndex=0, endpointWorks=[RangeWork{segment='dt=20160701/000000_0', startPackId=0, endPackId=1}]}}
2017-01-17 08:55:53,754 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.f.SimpleParallelizer - Root fragment:
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 0
minor_fragment_id: 0
}
fragment_json: "{
"pop" : "screen",
"@id" : 0,
"child" : {
"pop" : "project",
"@id" : 1,
"exprs" : [ {
"ref" : "date",
"expr" : "date"
}, {
"ref" : "d1",
"expr" : "d1"
}, {
"ref" : "m1",
"expr" : "m1"
}, {
"ref" : "m2",
"expr" : "m2"
}, {
"ref" : "m3",
"expr" : "m3"
}, {
"ref" : "m4",
"expr" : "m4"
} ],
"child" : {
"pop" : "unordered-receiver",
"@id" : 2,
"sender-major-fragment" : 1,
"senders" : [ {
"minorFragmentId" : 0,
"endpoint" : "Cgtob3N0LTE3MC0xORCi8gEYo/IBIKTyAQ=="
} ],
"spooling" : false,
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
},
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
},
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 0.0
}"
leaf_fragment: false
assignment {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
foreman {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
mem_initial: 3000000
mem_max: 30000000000
credentials {
user_name: "anonymous"
}
options_json: "[ ]"
context {
query_start_time: 1484614552850
time_zone: 505
default_schema_name: ""
}
collector {
opposite_major_fragment_id: 1
incoming_minor_fragment: 0
supports_out_of_order: true
is_spooling: false
}

2017-01-17 08:55:53,763 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.p.f.SimpleParallelizer - Remote fragment:
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 1
minor_fragment_id: 0
}
fragment_json: "{
"pop" : "single-sender",
"@id" : 0,
"receiver-major-fragment" : 0,
"receiver-minor-fragment" : 0,
"child" : {
"pop" : "project",
"@id" : 1,
"exprs" : [ {
"ref" : "date",
"expr" : "date"
}, {
"ref" : "d1",
"expr" : "d1"
}, {
"ref" : "m1",
"expr" : "m1"
}, {
"ref" : "m2",
"expr" : "m2"
}, {
"ref" : "m3",
"expr" : "m3"
}, {
"ref" : "m4",
"expr" : "m4"
} ],
"child" : {
"pop" : "indexr-segments-scan",
"@id" : 2,
"pluginConfig" : {
"type" : "indexr",
"enabled" : true
},
"spec" : {
"scanId" : "ae746a55-90c2-48e7-aaa6-85156af1e33e",
"tableName" : "test",
"scanCount" : 1,
"scanIndex" : 0,
"endpointWorks" : [ {
"segment" : "dt=20160701/000000_0",
"startPackId" : 0,
"endPackId" : 1
} ],
"rsFilter" : null
},
"columns" : [ "date", "d1", "m1", "m2", "m3", "m4" ],
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 0.0
},
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
},
"destination" : "Cgtob3N0LTE3MC0xMhCi8gEYo/IBIKTyAQ==",
"initialAllocation" : 1000000,
"maxAllocation" : 10000000000,
"cost" : 100000.0
}"
leaf_fragment: true
assignment {
address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
}
foreman {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
mem_initial: 2000000
mem_max: 20000000000
credentials {
user_name: "anonymous"
}
options_json: "[ ]"
context {
query_start_time: 1484614552850
time_zone: 505
default_schema_name: ""
}

2017-01-17 08:55:53,763 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Adding fragment status listener for queryId 27829467-0663-aa9b-bac3-7144f9d76eaf.
2017-01-17 08:55:53,763 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.work.foreman.Foreman - Submitting fragments to run.
2017-01-17 08:55:53,767 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.ops.FragmentContext - Getting initial memory allocation of 3000000
2017-01-17 08:55:53,767 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.ops.FragmentContext - Fragment max allocation: 30000000000
2017-01-17 08:55:53,776 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.e.work.batch.IncomingBuffers - Came up with a list of 1 required fragments. Fragments {1=org.apache.drill.exec.work.batch.MergingCollector@285411bd}
2017-01-17 08:55:53,797 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Manager created: 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0
2017-01-17 08:55:53,800 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.work.foreman.Foreman - Sending remote fragments to
Node:
address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012

Data:
fragment {
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 1
minor_fragment_id: 0
}
fragment_json: "{\n "pop" : "single-sender",\n "@id" : 0,\n "receiver-major-fragment" : 0,\n "receiver-minor-fragment" : 0,\n "child" : {\n "pop" : "project",\n "@id" : 1,\n "exprs" : [ {\n "ref" : "date",\n "expr" : "date"\n }, {\n "ref" : "d1",\n "expr" : "d1"\n }, {\n "ref" : "m1",\n "expr" : "m1"\n }, {\n "ref" : "m2",\n "expr" : "m2"\n }, {\n "ref" : "m3",\n "expr" : "m3"\n }, {\n "ref" : "m4",\n "expr" : "m4"\n } ],\n "child" : {\n "pop" : "indexr-segments-scan",\n "@id" : 2,\n "pluginConfig" : {\n "type" : "indexr",\n "enabled" : true\n },\n "spec" : {\n "scanId" : "ae746a55-90c2-48e7-aaa6-85156af1e33e",\n "tableName" : "test",\n "scanCount" : 1,\n "scanIndex" : 0,\n "endpointWorks" : [ {\n "segment" : "dt=20160701/000000_0",\n "startPackId" : 0,\n "endPackId" : 1\n } ],\n "rsFilter" : null\n },\n "columns" : [ "date", "d1", "m1", "m2", "m3", "m4" ],\n "initialAllocation" : 1000000,\n "maxAllocation" : 10000000000,\n "cost" : 0.0\n },\n "initialAllocation" : 1000000,\n "maxAllocation" : 10000000000,\n "cost" : 100000.0\n },\n "destination" : "Cgtob3N0LTE3MC0xMhCi8gEYo/IBIKTyAQ==",\n "initialAllocation" : 1000000,\n "maxAllocation" : 10000000000,\n "cost" : 100000.0\n}"
leaf_fragment: true
assignment {
address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
}
foreman {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
mem_initial: 2000000
mem_max: 20000000000
credentials {
user_name: "anonymous"
}
options_json: "[ ]"
context {
query_start_time: 1484614552850
time_zone: 505
default_schema_name: ""
}
}

2017-01-17 08:55:53,869 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.work.foreman.Foreman - 27829467-0663-aa9b-bac3-7144f9d76eaf: State change requested STARTING --> RUNNING
2017-01-17 08:55:53,875 [27829467-0663-aa9b-bac3-7144f9d76eaf:foreman] DEBUG o.a.drill.exec.work.foreman.Foreman - Fragments running.
2017-01-17 08:55:54,206 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to QueryManager of profile {
state: RUNNING
minor_fragment_id: 0
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 2
operator_type: 24
setup_nanos: 0
process_nanos: 59886068
peak_local_memory_allocated: 2621440
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 1
operator_type: 10
setup_nanos: 0
process_nanos: 0
peak_local_memory_allocated: 0
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 0
operator_type: 0
setup_nanos: 0
process_nanos: 0
peak_local_memory_allocated: 0
wait_nanos: 0
}
start_time: 1484614553889
end_time: 1484614554147
memory_used: 4621440
max_memory_used: 4621440
endpoint {
address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
}
}
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 1
minor_fragment_id: 0
}

2017-01-17 08:55:54,206 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.rpc.control.ControlServer - Sending response with Sender 1644047567
2017-01-17 08:55:54,728 [DATA-rpc-event-queue] DEBUG o.a.drill.exec.rpc.data.DataServer - Sending response with Sender 1975780582
2017-01-17 08:55:54,729 [DATA-rpc-event-queue] DEBUG o.a.drill.exec.rpc.data.DataServer - Sending response with Sender 2021802449
2017-01-17 08:55:54,736 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to QueryManager of profile {
state: FINISHED
minor_fragment_id: 0
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 2
operator_type: 24
setup_nanos: 0
process_nanos: 126600820
peak_local_memory_allocated: 2621440
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 1
schemas: 1
}
operator_id: 1
operator_type: 10
setup_nanos: 414607499
process_nanos: 1543542
peak_local_memory_allocated: 0
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 1
schemas: 1
}
operator_id: 0
operator_type: 0
setup_nanos: 0
process_nanos: 15359336
peak_local_memory_allocated: 0
metric {
metric_id: 0
long_value: 0
}
wait_nanos: 39317390
}
start_time: 1484614553889
end_time: 1484614554731
memory_used: 0
max_memory_used: 4621440
endpoint {
address: "host-170-19"
user_port: 31010
control_port: 31011
data_port: 31012
}
}
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 1
minor_fragment_id: 0
}

2017-01-17 08:55:54,736 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.work.foreman.QueryManager - Foreman is still waiting for completion message from 1 nodes containing 2 fragments
2017-01-17 08:55:54,736 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.rpc.control.ControlServer - Sending response with Sender 991840384
2017-01-17 08:55:54,747 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.physical.impl.BaseRootExec - BaseRootExec(132177394) operators: org.apache.drill.exec.physical.impl.project.ProjectRecordBatch 1742284505, org.apache.drill.exec.physical.impl.unorderedreceiver.UnorderedReceiverBatch 375486921
2017-01-17 08:55:54,748 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.exec.physical.impl.ImplCreator - Took 17 ms to create RecordBatch tree
2017-01-17 08:55:54,748 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0: State change requested AWAITING_ALLOCATION --> RUNNING
2017-01-17 08:55:54,749 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0: State to report: RUNNING
2017-01-17 08:55:54,770 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.w.fragment.FragmentExecutor - Starting fragment 0:0 on host-170-12:31010
2017-01-17 08:55:54,773 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to QueryManager of profile {
state: RUNNING
minor_fragment_id: 0
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 2
operator_type: 11
setup_nanos: 0
process_nanos: 0
peak_local_memory_allocated: 0
metric {
metric_id: 1
long_value: 1
}
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 1
operator_type: 10
setup_nanos: 0
process_nanos: 0
peak_local_memory_allocated: 0
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 0
schemas: 0
}
operator_id: 0
operator_type: 13
setup_nanos: 0
process_nanos: 0
peak_local_memory_allocated: 0
wait_nanos: 0
}
start_time: 1484614553771
end_time: 1484614554749
memory_used: 3000000
max_memory_used: 3000000
endpoint {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
}
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 0
minor_fragment_id: 0
}

2017-01-17 08:55:54,773 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.rpc.control.ControlServer - Sending response with Sender 716621325
2017-01-17 08:55:54,946 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.compile.JaninoClassCompiler - Compiling (source size=569 B):
1:
2: package org.apache.drill.exec.test.generated;
3:
4: import org.apache.drill.exec.exception.SchemaChangeException;
5: import org.apache.drill.exec.ops.FragmentContext;
6: import org.apache.drill.exec.record.RecordBatch;
7:
8: public class ProjectorGen0 {
9:
10:
11: public void doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing)
12: throws SchemaChangeException
13: {
14: }
15:
16: public void doEval(int inIndex, int outIndex)
17: throws SchemaChangeException
18: {
19: }
20:
21: public void DRILL_INIT()
22: throws SchemaChangeException
23: {
24: }
25:
26: }

2017-01-17 08:55:55,134 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.exec.compile.ClassTransformer - Done compiling (bytecode size=986 B, time:188 millis).
2017-01-17 08:55:55,145 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.w.batch.BaseRawBatchBuffer - Got last batch from 1:0
2017-01-17 08:55:55,145 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.w.batch.BaseRawBatchBuffer - Streams finished
2017-01-17 08:55:55,147 [USER-rpc-event-queue] DEBUG o.a.d.e.rpc.user.QueryResultHandler - batchArrived: queryId = 27829467-0663-aa9b-bac3-7144f9d76eaf
2017-01-17 08:55:55,148 [USER-rpc-event-queue] DEBUG o.a.drill.exec.rpc.user.UserClient - Sending response with Sender 15269091
2017-01-17 08:55:55,149 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.physical.impl.BaseRootExec - closed operator 1742284505
2017-01-17 08:55:55,149 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.e.physical.impl.BaseRootExec - closed operator 375486921
2017-01-17 08:55:55,149 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.exec.ops.OperatorContextImpl - Closing context for org.apache.drill.exec.physical.config.UnorderedReceiver
2017-01-17 08:55:55,149 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.exec.ops.OperatorContextImpl - Closing context for org.apache.drill.exec.physical.config.Project
2017-01-17 08:55:55,149 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] DEBUG o.a.d.exec.ops.OperatorContextImpl - Closing context for org.apache.drill.exec.physical.config.Screen
2017-01-17 08:55:55,150 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0: State change requested RUNNING --> FINISHED
2017-01-17 08:55:55,150 [27829467-0663-aa9b-bac3-7144f9d76eaf:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0: State to report: FINISHED
2017-01-17 08:55:55,151 [drill-executor-1] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Removing fragment manager: 27829467-0663-aa9b-bac3-7144f9d76eaf:0:0
2017-01-17 08:55:55,152 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to QueryManager of profile {
state: FINISHED
minor_fragment_id: 0
operator_profile {
input_profile {
records: 0
batches: 1
schemas: 1
}
operator_id: 2
operator_type: 11
setup_nanos: 0
process_nanos: 52348556
peak_local_memory_allocated: 4
metric {
metric_id: 0
long_value: 0
}
metric {
metric_id: 1
long_value: 1
}
wait_nanos: 955317
}
operator_profile {
input_profile {
records: 0
batches: 1
schemas: 1
}
operator_id: 1
operator_type: 10
setup_nanos: 310902519
process_nanos: 1510936
peak_local_memory_allocated: 4
wait_nanos: 0
}
operator_profile {
input_profile {
records: 0
batches: 1
schemas: 1
}
operator_id: 0
operator_type: 13
setup_nanos: 0
process_nanos: 9163927
peak_local_memory_allocated: 0
metric {
metric_id: 0
long_value: 0
}
wait_nanos: 3091685
}
start_time: 1484614553771
end_time: 1484614555150
memory_used: 0
max_memory_used: 3000000
endpoint {
address: "host-170-12"
user_port: 31010
control_port: 31011
data_port: 31012
}
}
handle {
query_id {
part1: 2847001084661312155
part2: -4989019421133017425
}
major_fragment_id: 0
minor_fragment_id: 0
}

2017-01-17 08:55:55,153 [CONTROL-rpc-event-queue] DEBUG o.a.drill.exec.work.foreman.Foreman - 27829467-0663-aa9b-bac3-7144f9d76eaf: State change requested RUNNING --> COMPLETED
2017-01-17 08:55:55,154 [CONTROL-rpc-event-queue] DEBUG o.a.drill.exec.work.foreman.Foreman - 27829467-0663-aa9b-bac3-7144f9d76eaf: cleaning up.
2017-01-17 08:55:55,182 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Removing fragment status listener for queryId 27829467-0663-aa9b-bac3-7144f9d76eaf.
2017-01-17 08:55:55,216 [CONTROL-rpc-event-queue] DEBUG o.a.d.exec.rpc.control.ControlServer - Sending response with Sender 1498241540
2017-01-17 08:55:55,217 [USER-rpc-event-queue] DEBUG o.a.d.e.rpc.user.QueryResultHandler - resultArrived: queryState: COMPLETED, queryId = 27829467-0663-aa9b-bac3-7144f9d76eaf
2017-01-17 08:55:55,218 [qtp1052247420-163] DEBUG o.apache.drill.exec.rpc.BasicClient - Closing client
2017-01-17 08:55:55,218 [USER-rpc-event-queue] DEBUG o.a.drill.exec.rpc.user.UserClient - Sending response with Sender 2045361580


flowbehappy commented on August 12, 2024

@aerobe

It looks like you have misconfigured this setting:

indexr.fs.connection=hdfs://host-170-12:8022/

According to your Hive CREATE TABLE statement, it should be

indexr.fs.connection=hdfs://host-170-12:8020/
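A quick way to check this yourself (the values below are hypothetical; substitute whatever your own core-site.xml and indexr.config.properties actually contain) is to compare the two filesystem URIs directly:

```shell
# Hypothetical sanity check: the filesystem URI Hive writes to and the one
# IndexR reads from must be byte-identical, including the port.
hive_fs="hdfs://host-170-12:8020"    # fs.defaultFS from core-site.xml
indexr_fs="hdfs://host-170-12:8020"  # indexr.fs.connection from indexr.config.properties

if [ "$hive_fs" = "$indexr_fs" ]; then
  echo "filesystem URIs match"
else
  echo "MISMATCH: hive=$hive_fs indexr=$indexr_fs"
fi
```

On a live cluster you can print the real default filesystem with `hdfs getconf -confKey fs.defaultFS` instead of hard-coding it.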


aerobe commented on August 12, 2024

@flowbehappy
Following your suggestion, I changed the port to 8020, but it doesn't work.
Besides, I can use a Drill CTAS statement to create a table under the path hdfs:/tmp,
so I think the connection to HDFS is working.


flowbehappy commented on August 12, 2024

@aerobe

Please make sure the Hive table location and the IndexR table location point to the same path. Otherwise they cannot see each other's data.

Also check the core-site.xml and hdfs-site.xml used by both Hive and Drill, and make sure they point to the same Hadoop cluster.
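Concretely, with the layout used earlier in this thread (a Hive table created with `LOCATION '/indexr/segment/test'` and `indexr.fs.data.root=/indexr`), the Hive location must equal the directory IndexR derives from its own settings. A minimal sketch of that check, assuming IndexR looks for segments under `<data.root>/segment/<table>`:

```shell
# Hypothetical check that the Hive LOCATION matches where IndexR will look.
indexr_data_root="/indexr"            # indexr.fs.data.root
table_name="test"                     # IndexR table name
hive_location="/indexr/segment/test"  # LOCATION from the Hive DDL

expected="$indexr_data_root/segment/$table_name"
if [ "$hive_location" = "$expected" ]; then
  echo "locations agree: $expected"
else
  echo "locations differ: hive=$hive_location indexr=$expected"
fi
```

If the two paths differ, Hive will keep writing segments where IndexR never reads, which shows up exactly as queries returning zero rows with no errors.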


KarcyLee commented on August 12, 2024

Hi, the same error occurs for me, and I cannot find a way to solve it. My indexr.config.properties is identical on every node, and Hive works well. I insert data in Hive and query it successfully, but I get an empty table in Drill, i.e. nothing is returned. Please help me!

