
spark-solr's Introduction

Lucidworks Spark/Solr Integration

Version Compatibility

The spark-solr project has several releases, each of which supports different versions of Spark and Solr. The compatibility chart below shows the versions supported across past releases. 'Connector' refers to the 'spark-solr' library.

Connector   Spark    Solr
4.0.2       3.1.2    8.11.0
4.0.1       3.1.2    8.11.0
4.0.0       3.1.2    8.9.0
3.10.0      2.4.5    8.9.0
3.9.0       2.4.5    8.3.0
3.8.1       2.4.5    8.3.0
3.8.0       2.4.5    8.3.0
3.7.0       2.4.4    8.3.0
3.6.6       2.4.3    8.2.0
3.6.0       2.4.0    7.5.0
3.5.19      2.3.2    7.6.0
3.4.5       2.2.1    7.2.1
3.3.4       2.2.0    7.1.0
3.2.2       2.2.0    6.6.1
3.1.1       2.1.1    6.6.0
3.0.4       2.1.1    6.5.1
2.4.0       1.6.3    6.4.2
2.3.4       1.6.3    6.3.0
2.2.3       1.6.2    6.1.0
2.1.0       1.6.2    6.1.0
2.0.4       1.6.1    5.5.2

Getting started

Import jar File via spark-shell

cd $SPARK_HOME
./bin/spark-shell --jars spark-solr-{version}-shaded.jar

The shaded jar can be downloaded from Maven Central or built from the respective branch.

Connect to your SolrCloud Instance

via DataFrame

val options = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}"
)
val df = spark.read.format("solr")
  .options(options)
  .load
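
Once loaded, the DataFrame can be queried with Spark SQL like any other source. A minimal sketch (Spark 2.x or later; the view name is illustrative):

df.printSchema()
df.createOrReplaceTempView("solr_docs")
spark.sql("SELECT id FROM solr_docs LIMIT 10").show()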

via RDD

import com.lucidworks.spark.rdd.SelectSolrRDD
val solrRDD = new SelectSolrRDD(zkHost, collectionName, sc)

SelectSolrRDD is an RDD of SolrDocument

via RDD (Java)

import com.lucidworks.spark.rdd.SolrJavaRDD;
import org.apache.spark.api.java.JavaRDD;

SolrJavaRDD solrRDD = SolrJavaRDD.get(zkHost, collection, jsc.sc());
JavaRDD<SolrDocument> resultsRDD = solrRDD.queryShards(solrQuery);

Download/Build the jar Files

Maven Central

The released jar files (1.1.2, 2.0.0, etc.) can be downloaded from the Maven Central repository. Maven Central also holds the shaded, sources, and javadoc .jars for each release.

<dependency>
   <groupId>com.lucidworks.spark</groupId>
   <artifactId>spark-solr</artifactId>
   <version>{latestVersion}</version>
</dependency>

Snapshots

Snapshots of spark-solr are built for every commit on the master branch. The snapshots can be accessed from OSS Sonatype.

Build from Source

mvn clean package -DskipTests

This will build 2 jars in the target directory:

  • spark-solr-${VERSION}.jar

  • spark-solr-${VERSION}-shaded.jar

${VERSION} will be something like 3.5.6-SNAPSHOT, for development builds.

The first .jar is what you’d want to use if you were using spark-solr in your own project. The second is what you’d use to submit one of the included example apps to Spark.

Features

  • Send objects from Spark (Streaming or DataFrames) into Solr.

  • Read the results of a Solr query as a Spark RDD or DataFrame.

  • Stream documents from Solr using the /export handler (only works for exporting fields that have docValues enabled).

  • Read large result sets from Solr using cursors or the /export handler.

  • Data locality. If Spark workers and Solr processes are co-located on the same nodes, the partitions are placed on the nodes where the replicas are located.

Querying

Cursors

Cursors are used by default to pull documents out of Solr. By default, the number of tasks allocated will be the number of shards available for the collection.

If your Spark cluster has more available executor slots than the number of shards, then you can increase parallelism when reading from Solr by splitting each shard into sub ranges using a split field. A good candidate for the split field is the version field that is attached to every document by the shard leader during indexing. See the splits section to enable and configure intra-shard splitting.

Cursors won’t work if the index changes during query time. Constrain your query to a static index by applying additional Solr parameters via solr.params.

Streaming API (/export)

If the fields being queried have docValues enabled, then the Streaming API can be used to pull documents from Solr in a true streaming fashion. This method is 8-10x faster than cursors. The request_handler option enables the Streaming API via the DataFrame API.
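
For example, a read through the /export handler might look like the following sketch; the collection, ZooKeeper string, and field names are placeholders, and every listed field must have docValues enabled:

val exportOptions = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  "request_handler" -> "/export",        // stream results instead of paging with cursors
  "fields" -> "id,author_s,favorited_b"  // /export requires an explicit field list
)
val exportDF = spark.read.format("solr").options(exportOptions).load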

Indexing

Objects can be sent to Solr via Spark Streaming or DataFrames. The schema is inferred from the DataFrame, and any fields that do not exist in the Solr schema will be added via the Schema API. See ManagedIndexSchemaFactory.

See Index parameters for configuration and tuning.
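
As a minimal sketch (the collection, ZooKeeper string, sample data, and field names are illustrative), writing a DataFrame from the spark-shell looks like this:

import spark.implicits._

// a tiny example DataFrame; description_t relies on a dynamic *_t text field in the Solr schema
val events = Seq(("1", "first event"), ("2", "second event")).toDF("id", "description_t")

val writeOptions = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  "commit_within" -> "5000"   // commit within 5 seconds; see Index parameters below
)
events.write.format("solr").options(writeOptions).mode("overwrite").save()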

Configuration and Tuning

The Solr DataSource supports a number of optional parameters that allow you to optimize performance when reading data from Solr. The only required parameters for the DataSource are zkhost and collection.

Query Parameters

query

Probably the most obvious option is to specify a Solr query that limits the rows you want to load into Spark. For instance, if we only wanted to load documents that mention "solr", we would do:

Usage: option("query","body_t:solr")

Default: *:*

If you don’t specify the "query" option, then all rows are read using the "match all documents" query (*:*).

fields

You can use the fields option to specify a subset of fields to retrieve for each document in your results:

Usage: option("fields","id,author_s,favorited_b,…​")

By default, all stored fields for each document are pulled back from Solr.

You can also specify an alias for a field using Solr’s field alias syntax, e.g. author:author_s. If you want to invoke a function query, such as rord(), then you’ll need to provide an alias, e.g. ord_user:ord(user_id). If the return type of the function query is something other than int or long, then you’ll need to specify the return type after the function query, such as: foo:div(sum(x,100),max(y,1)):double

Tip
If you request Solr function queries, then the library must use the /select handler to make the request as exporting function queries through /export is not supported by Solr.
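
For instance, a read that aliases a field and adds a typed function query column (the field names and the score_d alias are illustrative):

val fieldsOptions = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  // alias author_s to author, and compute a function query returning a double
  "fields" -> "id,author:author_s,score_d:div(sum(x,100),max(y,1)):double"
)
val aliasedDF = spark.read.format("solr").options(fieldsOptions).load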

filters

You can use the filters option to set filter queries on the Solr query:

Usage: option("filters","firstName:Sam,lastName:Powell")

rows

You can use the rows option to specify the number of rows to retrieve from Solr per request; do not confuse this with max_rows (see below). Behind the scenes, the implementation uses either deep paging cursors or Streaming API and response streaming, so it is usually safe to specify a large number of rows.

To be clear, this is not the maximum number of rows to read from Solr. All matching rows on the backend are read. The rows parameter is the page size.

By default, the implementation uses 1000 rows but if your documents are smaller, you can increase this to 10000. Using too large a value can put pressure on the Solr JVM’s garbage collector.

Usage: option("rows","10000") Default: 1000

max_rows

Limits the result set to a maximum number of rows; only applies when using the /select handler. The library will issue the query from a single task and let Solr do the distributed query processing. In addition, no paging is performed, i.e. the rows param is set to max_rows when querying. Consequently, this option should not be used for large max_rows values; rather, you should just retrieve all rows using multiple Spark tasks and then re-sort with Spark if needed.

Usage: option("max_rows", "100") Defalut: None

request_handler

Set the Solr request handler for queries. This option can be used to export results from Solr via the /export handler, which streams data out of Solr. See Exporting Result Sets for more information.

The /export handler needs fields to be explicitly specified. Please use the fields option or specify the fields in the query.

Usage: option("request_handler", "/export") Default: /select

Increase Read Parallelism using Intra-Shard Splits

If your Spark cluster has more available executor slots than the number of shards, then you can increase parallelism when reading from Solr by splitting each shard into sub ranges using a split field. The sub-range splitting enables faster fetching from Solr by increasing the number of tasks in Spark. This should only be used if there are enough computing resources in the Spark cluster.

Shard splitting is disabled by default.

splits

Enable shard splitting on default field _version_.

Usage: option("splits", "true")

Default: false

The above option is equivalent to option("split_field", "_version_")

split_field

The field to split on can be changed using the split_field option.

Usage: option("split_field", "id") Default: _version_

splits_per_shard

Behind the scenes, the DataSource implementation tries to split the shard into evenly sized splits using filter queries. You can also split on a string-based keyword field but it should have sufficient variance in the values to allow for creating enough splits to be useful. In other words, if your Spark cluster can handle 10 splits per shard, but there are only 3 unique values in a keyword field, then you will only get 3 splits.

Keep in mind that this is only a hint to the split calculator and you may end up with a slightly different number of splits than what was requested.

Usage: option("splits_per_shard", "30")

Default: 1
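
Putting the split options together, a read that splits each shard on _version_ into roughly 30 sub-ranges could look like this sketch (collection and ZooKeeper string are placeholders):

val splitOptions = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  "splits" -> "true",          // equivalent to split_field=_version_
  "splits_per_shard" -> "30"   // a hint; the actual number of splits may differ
)
val splitDF = spark.read.format("solr").options(splitOptions).load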

flatten_multivalued

This option is enabled by default and flattens multi-valued fields from Solr.

Usage: option("flatten_multivalued", "false")

Default: true

dv

The dv option will fetch docValues that are indexed but not stored, by using function queries. This should be used for Solr versions lower than 5.5.0.

Usage: option("dv", "true")

Default: false

skip_non_dv

The skip_non_dv option instructs the Solr DataSource to skip all fields that do not have docValues.

Usage: option("skip_non_dv", "true")

Default: false

sample_seed

The sample_seed option allows you to read a random sample of documents from Solr using the specified seed. This option can be useful if you just need to explore the data before performing operations on the full result set. By default, if this option is provided, a 10% sample size is read from Solr, but you can use the sample_pct option to control the sample size.

Usage: option("sample_seed", "5150")

Default: None

sample_pct

The sample_pct option allows you to set the size of a random sample of documents from Solr; use a value between 0 and 1.

Usage: option("sample_pct", "0.05")

Default: 0.1
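
For example, reading a repeatable 5% sample (the seed value and placeholders are illustrative):

val sampleOptions = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  "sample_seed" -> "5150",   // fixed seed so the sample is repeatable
  "sample_pct" -> "0.05"     // 5% of the matching documents
)
val sampleDF = spark.read.format("solr").options(sampleOptions).load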

solr.params

The solr.params option can be used to specify arbitrary Solr parameters, written as they would appear in a Solr query string.

Tip
Don’t use this to pass parameters that are covered by other options, such as fl (use the fields option) or sort. This option is strictly intended for parameters that are NOT covered by other options.

Usage: option("solr.params", "fq=userId:[10 TO 1000]")

Index parameters

soft_commit_secs

If specified, the soft_commit_secs option will be set via the Solr Config API during indexing.

Usage: option("soft_commit_secs", "10")

Default: None

commit_within

The commit_within param sets commitWithin on the indexing requests processed by SolrClient. This value should be in milliseconds. See commitWithin.

Usage: option("commit_within", "5000")

Default: None

batch_size

The batch_size option determines the number of documents that are sent to Solr via an HTTP call during indexing. Set this option higher if the docs are small and memory is available.

Usage: option("batch_size", "10000")

Default: 500

gen_uniq_key

If the documents are missing the unique key (derived from the Solr schema), then the gen_uniq_key option will generate a unique value for each document before indexing to Solr. Instead of this option, the UUIDUpdateProcessorFactory can be used to generate UUID values for documents that are missing the unique key field.

Usage: option("gen_uniq_key", "true")

Default: false

solr_field_types

This option can be used to specify field types for fields written to Solr. It only works if the field names are not already defined in the Solr schema.

Usage: option("solr_field_types", "rating:string,title:text_en"

Querying Time Series Data

partition_by

Set this option to time in order to query multiple time series collections, partitioned according to some time period.

Usage: option("partition_by", "time")

Default: None

time_period

This is of the form X DAYS/HOURS/MINUTES. This should be the time period with which the partitions are created.

Usage: option("time_period", "1MINUTES")

Default: 1DAYS

datetime_pattern

This pattern can be inferred from time_period, but this option can be used to specify it explicitly.

Usage: option("datetime_pattern", "yyyy_MM_dd_HH_mm")

Default: yyyy_MM_dd

timestamp_field_name

This option is used to specify the field name in the indexed documents where the timestamp is found.

Usage: option("timestamp_field_name", "ts")

Default: timestamp_tdt

timezone_id

Used to specify the timezone.

Usage: option("timezone_id", "IST")

Default: UTC

max_active_partitions

This option is used to specify the maximum number of partitions that are allowed at a time.

Usage: option("max_active_partitions", "100")

Default: null
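
Taken together, a query against collections partitioned per day might use options like the following sketch; the base collection name, ZooKeeper string, and timestamp field are placeholders:

val tsOptions = Map(
  "collection" -> "{base_collection_name}",   // base name of the time-partitioned collections
  "zkhost" -> "{zk_connect_string}",
  "partition_by" -> "time",
  "time_period" -> "1DAYS",
  "datetime_pattern" -> "yyyy_MM_dd",
  "timestamp_field_name" -> "timestamp_tdt",
  "timezone_id" -> "UTC"
)
val tsDF = spark.read.format("solr").options(tsOptions).load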

Troubleshooting Tips

Why is dataFrame.count so slow?

Solr can provide the number of matching documents nearly instantly, so why is calling count on a DataFrame backed by a Solr query so slow? The reason is that Spark likes to read all rows before performing any operations on a DataFrame. So when you ask SparkSQL to count the rows in a DataFrame, spark-solr has to read all matching documents from Solr and then count the rows in the RDD.

If you’re just exploring a Solr collection from Spark and need to know the number of matching rows for a query, you can use the SolrQuerySupport.getNumDocsFromSolr utility function.
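
A sketch of that approach from the spark-shell; the exact signature of getNumDocsFromSolr (collection, zkHost, optional SolrQuery) is assumed here and may differ between releases, so check SolrQuerySupport in your version:

import org.apache.solr.client.solrj.SolrQuery
import com.lucidworks.spark.util.SolrQuerySupport

// count matching documents without pulling any rows into Spark
// (signature assumed; verify against your spark-solr release)
val numDocs = SolrQuerySupport.getNumDocsFromSolr(
  "{solr_collection_name}",
  "{zk_connect_string}",
  Some(new SolrQuery("body_t:solr"))
)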

I set rows to 10 and now my job takes forever to read 10 rows from Solr!

The rows option sets the page size, but all matching rows are read from Solr for every query. So if your query matches many documents in Solr, then Spark reads them all, 10 docs per request.

Use the sample_seed option to limit the size of the results returned from Solr.

Developing a Spark Application

The com.lucidworks.spark.SparkApp provides a simple framework for implementing Spark applications in Java. The class saves you from having to duplicate boilerplate code needed to run a Spark application, giving you more time to focus on the business logic of your application.

To leverage this framework, you need to develop a concrete class that either implements RDDProcessor or extends StreamProcessor depending on the type of application you’re developing.

RDDProcessor

Implement the com.lucidworks.spark.SparkApp$RDDProcessor interface for building a Spark application that operates on a JavaRDD, such as one pulled from a Solr query (see SolrQueryProcessor as an example).

StreamProcessor

Extend the com.lucidworks.spark.SparkApp$StreamProcessor abstract class to build a Spark streaming application.

See com.lucidworks.spark.example.streaming.oneusagov.OneUsaGovStreamProcessor or com.lucidworks.spark.example.streaming.TwitterToSolrStreamProcessor for examples of how to write a StreamProcessor.

Authenticating with Solr

For background on Solr security, see: Securing Solr.

Kerberos

The Kerberos config should be set via the system property java.security.auth.login.config in extraJavaOptions for both the executor and the driver.

SparkApp

The SparkApp framework (in spark-solr) allows you to pass the path to a JAAS authentication configuration file using the -solrJaasAuthConfig option.

For example, if you need to authenticate using the "solr" Kerberos principal, you need to create a JAAS configuration file named jaas-client.conf that sets the location of your Kerberos keytab file, such as:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/keytabs/solr.keytab"
  storeKey=true
  useTicketCache=true
  debug=true
  principal="solr";
};

To use this configuration to authenticate to Solr, you simply need to pass the path to jaas-client.conf created above using the -solrJaasAuthConfig option, such as:

spark-submit --master yarn-server \
  --class com.lucidworks.spark.SparkApp \
  $SPARK_SOLR_PROJECT/target/spark-solr-${VERSION}-shaded.jar \
  hdfs-to-solr -zkHost $ZK -collection spark-hdfs \
  -hdfsPath /user/spark/testdata/syn_sample_50k \
  -solrJaasAuthConfig=/path/to/jaas-client.conf

Basic Auth

Basic auth can be configured via the system properties basicauth or solr.httpclient.config. These system properties have to be set on both the driver and executor JVMs.

Examples:

Using basicauth

 ./bin/spark-shell --master local[*] --jars ~/Git/spark-solr/target/spark-solr-3.0.1-SNAPSHOT-shaded.jar  --conf 'spark.driver.extraJavaOptions=-Dbasicauth=solr:SolrRocks'

Using solr.httpclient.config

 ./bin/spark-shell --master local[*] --jars ~/Git/spark-solr/target/spark-solr-3.0.1-SNAPSHOT-shaded.jar  --conf 'spark.driver.extraJavaOptions=-Dsolr.httpclient.config=/Users/kiran/spark/spark-2.1.0-bin-hadoop2.7/auth.txt'

Contents of config file

httpBasicAuthUser=solr
httpBasicAuthPassword=SolrRocks

spark-solr's People

Contributors

cpoerschke, ctargett, dependabot[bot], epheatt, ganeshk7, gerlowskija, hyukjinkwon, ian-thebridge-lucidworks, jakemannix, janplus, jeisinge, jlleitschuh, joel-bernstein, juanmillan85, kiranchitturi, luis-munoz, makuk66, mehtakash93, mrt, nddipiazza, rajeshwrn, risdenk, sarowe, thelabdude, theoathinas, tusciucalecs, vetler


spark-solr's Issues

Twitter example not working?

System using solr 5.1 solrCloud. Created collection using one of the samples in 5.1 - solr-5.1.0/server/solr/configsets/data_driven_schema_configs/conf/.

Set up my twitter API stuff and launched:

./spark-submit --master local[2] --class com.lucidworks.spark.SparkApp /opt/sparkSolr/spark-solr-master/target/spark-solr-1.0-SNAPSHOT.jar twitter-to-solr -zkHost=foo.com:12181 -collection=SparkTwitter2
2015-05-07 08:26:08,553 [main] INFO SparkApp - Running processor twitter-to-solr
2015-05-07 08:26:09,237 [main] WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-07 08:26:10,049 [sparkDriver-akka.actor.default-dispatcher-7] INFO Slf4jLogger - Slf4jLogger started
2015-05-07 08:26:10,118 [sparkDriver-akka.actor.default-dispatcher-7] INFO Remoting - Starting remoting
2015-05-07 08:26:10,336 [sparkDriver-akka.actor.default-dispatcher-7] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://[email protected]:42899]
2015-05-07 08:26:10,618 [main] INFO Server - jetty-8.y.z-SNAPSHOT
2015-05-07 08:26:10,639 [main] INFO AbstractConnector - Started [email protected]:36168
2015-05-07 08:26:10,836 [main] INFO Server - jetty-8.y.z-SNAPSHOT
2015-05-07 08:26:10,866 [main] INFO AbstractConnector - Started [email protected]:4040
2015-05-07 08:26:13,427 [Twitter Stream consumer-1[initializing]] INFO TwitterStreamImpl - Establishing connection.
2015-05-07 08:26:14,833 [Twitter Stream consumer-1[Establishing connection]] INFO TwitterStreamImpl - Connection established.
2015-05-07 08:26:14,834 [Twitter Stream consumer-1[Establishing connection]] INFO TwitterStreamImpl - Receiving status stream.
2015-05-07 08:26:16,145 [Executor task launch worker-1] INFO SolrZkClient - Using default ZkCredentialsProvider
2015-05-07 08:26:16,179 [Executor task launch worker-1] INFO ConnectionManager - Waiting for client to connect to ZooKeeper
2015-05-07 08:26:16,195 [zkCallback-2-thread-1] INFO ConnectionManager - Watcher org.apache.solr.common.cloud.ConnectionManager@473b5792 name:ZooKeeperConnection Watcher:foo.com:12181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2015-05-07 08:26:16,195 [Executor task launch worker-1] INFO ConnectionManager - Client is connected to ZooKeeper
2015-05-07 08:26:16,195 [Executor task launch worker-1] INFO SolrZkClient - Using default ZkACLProvider
2015-05-07 08:26:16,197 [Executor task launch worker-1] INFO ZkStateReader - Updating cluster state from ZooKeeper...

Launched the Spark GUI and see all kinds of completed jobs, stages, and batches processed. No apparent failures. I run a query in the Solr Admin UI and don't have any results in my collection.

What should I look for here?

Tks!

Twitter example not working

hduser@abhi-VirtualBox:~$ /usr/local/spark/bin/spark-submit --master local[2] --class com.lucidworks.spark.SparkApp spark-solr-1.0-SNAPSHOT.jar twitter-to-solr -zkHost localhost:9983 -collection collection1
2015-05-26 17:01:39,459 [main] INFO SparkApp - Running processor twitter-to-solr
2015-05-26 17:01:40,134 [main] WARN Utils - Your hostname, abhi-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.10.10.241 instead (on interface eth0)
2015-05-26 17:01:40,137 [main] WARN Utils - Set SPARK_LOCAL_IP if you need to bind to another address
2015-05-26 17:01:46,469 [main] WARN NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-26 17:01:48,498 [sparkDriver-akka.actor.default-dispatcher-3] INFO Slf4jLogger - Slf4jLogger started
2015-05-26 17:01:48,694 [sparkDriver-akka.actor.default-dispatcher-3] INFO Remoting - Starting remoting
2015-05-26 17:01:49,416 [sparkDriver-akka.actor.default-dispatcher-3] INFO Remoting - Remoting started; listening on addresses :[akka.tcp://[email protected]:49486]
2015-05-26 17:01:50,165 [main] INFO Server - jetty-8.y.z-SNAPSHOT
2015-05-26 17:01:50,241 [main] INFO AbstractConnector - Started [email protected]:50090
2015-05-26 17:01:52,682 [main] INFO Server - jetty-8.y.z-SNAPSHOT
2015-05-26 17:01:52,706 [main] INFO AbstractConnector - Started [email protected]:4040
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/twitter/TwitterUtils
at com.lucidworks.spark.example.streaming.TwitterToSolrStreamProcessor.plan(TwitterToSolrStreamProcessor.java:36)
at com.lucidworks.spark.SparkApp$StreamProcessor.run(SparkApp.java:77)
at com.lucidworks.spark.SparkApp.main(SparkApp.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.twitter.TwitterUtils
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more

I have spark 1.3.1, solr 5.1.0 and hadoop 2.6. Please help

many rows! exception

I use the query {!terms f=id}id1,id2...
There may be 2000-5000 ids, and then I get this exception:

Most likely this means your query's sort criteria is not generating stable results for computing deep-paging cursors, has the index changed? If so, try using a filter criteria the bounds the results to non-changing data.

I don't know how to solve it!
Can you help me?

Email:[email protected]

jar in spark-1.6.0

hello,
my Spark version is 1.6.0,
and I add the jar here: export $SPARK_CLASSPATH:$SPARK_HOME/lib/solr.xxx2.0.jar
then I start start-thriftserver.sh ....
so I can search from solr_table via spark-sql or beeline.
but when I use spark-shell, a method is not found; the method is in scala-reflect 2.10.5, but your jar uses 2.10.4.

then I start with "start-thriftserver.sh --jars solr_xx.jar"; now I can use spark-shell and beeline, but spark-sql can't search from solr_table.

so, this should be changed.

thanks

org.scala-lang scala-reflect 2.10.5

solr-core dependency

hello!
i was wondering, why is solr-core a dependency? it brings in a ton of what i believe to be unneeded transitive dependencies.
shouldn't solrj be enough to talk to solr?

Exception in thread "main" java.lang.AbstractMethodError

I'm trying to write data from a dataframe to SOLR.

I use CDH 5.9, Spark 1.6.x and Apache SOLR 5.5.1

I'm getting the below error when executing the following commands:
val solrOpts = Map("zkhost" -> zkHost, "collection" -> collection)
df.write.format("solr").options(solrOpts).mode("overwrite").save()

Exception in thread "main" java.lang.AbstractMethodError at org.apache.spark.Logging$class.log(Logging.scala:50) at com.lucidworks.spark.util.SolrJsonSupport$.log(SolrJsonSupport.scala:22) at com.lucidworks.spark.util.SolrJsonSupport$.getJson(SolrJsonSupport.scala:71) at com.lucidworks.spark.util.SolrJsonSupport$.getJson(SolrJsonSupport.scala:34) at com.lucidworks.spark.util.SolrQuerySupport$.getUniqueKey(SolrQuerySupport.scala:86) at com.lucidworks.spark.rdd.SolrRDD.<init>(SolrRDD.scala:32) at com.lucidworks.spark.SolrRelation.<init>(SolrRelation.scala:53) at solr.DefaultSource.createRelation(DefaultSource.scala:26) at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222) at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148) at my.mimos.nsl.pistachio.solr.SolrUtils.solrOverWrite(Solr.scala:34) at my.mimos.nsl.pistachio.solr.SolrUtils.solrWrite(Solr.scala:18) at my.mimos.nsl.pistachio.citizen.CImage$delayedInit$body.apply(image.scala:135) at scala.Function0$class.apply$mcV$sp(Function0.scala:40) at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12) at scala.App$$anonfun$main$1.apply(App.scala:71) at scala.App$$anonfun$main$1.apply(App.scala:71) at scala.collection.immutable.List.foreach(List.scala:318) at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32) at scala.App$class.main(App.scala:71) at my.mimos.nsl.pistachio.citizen.CImage$.main(image.scala:35) at my.mimos.nsl.pistachio.citizen.CImage.main(image.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Any help would be really appreciated.
Regards,
Ady

com.google.common.util.concurrent.UncheckedExecutionException: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper

I am using a 3-node ZooKeeper ensemble, and I am getting the below error while running the hdfs-to-solr example.

: com.google.common.util.concurrent.UncheckedExecutionException: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper xxxxxx:2181,xxxxxx:2181,xxxxxx:2181 within 10000 ms
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at com.lucidworks.spark.util.SolrSupport$.getCachedCloudClient(SolrSupport.scala:93)
at com.lucidworks.spark.util.SolrSupport$$anonfun$indexDocs$1.apply(SolrSupport.scala:153)
at com.lucidworks.spark.util.SolrSupport$$anonfun$indexDocs$1.apply(SolrSupport.scala:152)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:878)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:878)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper xxxxxx:2181,xxxxxxx:2181,xxxxxx:2181 within 10000 ms
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:179)
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:113)
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:103)
at org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:228)
at org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:454)
at com.lucidworks.spark.util.SolrSupport$.getSolrCloudClient(SolrSupport.scala:83)
at com.lucidworks.spark.util.SolrSupport$.getNewSolrCloudClient(SolrSupport.scala:89)
at com.lucidworks.spark.util.CacheSolrClient$$anon$1.load(SolrSupport.scala:38)
at com.lucidworks.spark.util.CacheSolrClient$$anon$1.load(SolrSupport.scala:36)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
... 16 more

Request to collection failed due to (405)

16/06/08 17:30:27 ERROR CloudSolrClient: Request to collection core-schemaless failed due to (405) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://MyServer-Node:8983/solr/core-schemaless: Expected mime type application/octet-stream but got text/html.

I created "core-schemaless" and trying to Index my HBase data creating dataframe of it "docdf" and :

docdf.write.format("solr").options(options).mode(org.apache.spark.sql.SaveMode.Overwrite).save

write to server doesn't work with solr 4.4

Writing a Solr input document throws an error with server 4.4 (solr.add(...) or solr.request(updateRequest));
it throws the error "Unknown type 19".
Is it possible to run the code with previous versions of Solr?

Get java.lang.ClassNotFoundException when getting result from spark

Hi, guys,
I am writing a simple java application to read data from solr using spark.
my app dependencies are:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.2</version>
</dependency>
<dependency>
    <groupId>com.lucidworks.spark</groupId>
    <artifactId>spark-solr</artifactId>
    <version>2.1.0</version>
</dependency>

and I have a Spark standalone deployment

The main code is:

SparkConf conf = new SparkConf().setAppName("solr").setMaster("spark://ubuntu:7077");
JavaSparkContext sc = new JavaSparkContext(conf);
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

Map<String,String> options = new HashMap<String,String>();
options.put("zkhost", "localhost:2181");
options.put("collection", "kapner-persons");

DataFrame persons = sqlContext.read().format("solr").options(options).load();
persons.printSchema();
persons.show();
Everything works fine before I call persons.show(); and the schema can be printed normally.
But I got java.lang.ClassNotFoundException when calling persons.show();

below is the error trace:

2016-09-07 21:20:26,169 [task-result-getter-0] WARN TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, 192.168.195.232): java.lang.ClassNotFoundException: com.lucidworks.spark.ShardRDDPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

2016-09-07 21:20:26,373 [task-result-getter-3] ERROR TaskSetManager - Task 0 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 192.168.195.232): java.lang.ClassNotFoundException: com.lucidworks.spark.ShardRDDPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
at com.incpad.exesearch.exesearch.App.main(App.java:29)
Caused by: java.lang.ClassNotFoundException: com.lucidworks.spark.ShardRDDPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I don't know what I did wrong.
Or did I miss something?

Thanks

Using spark-solr with scala 2.11

I am using spark with scala 2.11 and trying to read data from solr but it says spark-solr is compiled with scala 2.10. How to compile spark-solr with scala 2.11?

spark-solr Kerberos support

Looks like there are 2 issues with Kerberos support:

  1. SolrSupport setupKerberosIfNeeded method logic isn't correct
    https://github.com/lucidworks/spark-solr/blob/master/src/main/scala/com/lucidworks/spark/util/SolrSupport.scala#L66
       if (configurer.get.isInstanceOf[Krb5HttpClientConfigurer]) {

should be

       if (!configurer.get.isInstanceOf[Krb5HttpClientConfigurer]) {
  2. SolrJsonSupport getHttpClient method doesn't set up Kerberos
    https://github.com/lucidworks/spark-solr/blob/master/src/main/scala/com/lucidworks/spark/util/SolrJsonSupport.scala#L164

I think it should have a SolrSupport.setupKerberosIfNeeded() call

def getHttpClient(): HttpClient = {
    SolrSupport.setupKerberosIfNeeded()

    val params = new ModifiableSolrParams()

I can open a PR if either of these fixes make sense.

SQL IN(ID1,ID2) not support

SPARK-SQL: select * from TT where id in ('00000525','00000174') limit 1;
finds no result,
but these all work:
select * from TT where id in ('00000525') limit 1;
select * from TT where id in ('00000174') limit 1;
select * from TT where id='00000525' or id='00000174' limit 1;

"java.lang.VerifyError: Bad return type" runnig spark-solr in Scala on Cloudera-5.5

Apologies if this is an error in my setup rather than an issue with the code, but I am not sure myself.

I am trying to ingest documents into Solr from a Spark Streaming job in Scala.

This is the Solr related code block:

    val docs = kafkaStream.flatMap(msg => {
      Json.parse(msg._2).as[JsArray].value.flatMap(x => x.as[JsArray].value).map(x => {
        val doc:SolrInputDocument = new common.SolrInputDocument()
        doc.setField("id", (x \ "id").as[String])
        doc.setField("description", (x \ "description").as[String])
        doc
      })
    })
    SolrSupport.indexDStreamOfDocs("hdp-vm-02:2181,hdp-vm-02:2181/solr", "descriptions_all", 10, docs)

When I run this via spark-submit the job fails straight away because of a java.lang.VerifyError. I presume I am mismatching versions of code somewhere here to get the following error:

java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;)Lorg/apache/http/impl/client/CloseableHttpClient; @57: areturn
  Reason:
    Type 'org/apache/http/impl/client/SystemDefaultHttpClient' (current frame, stack[0]) is not assignable to 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
  Current Frame:
    bci: @57
    flags: { }
    locals: { 'org/apache/solr/common/params/SolrParams', 'org/apache/solr/common/params/ModifiableSolrParams', 'org/apache/http/impl/client/SystemDefaultHttpClient' }
    stack: { 'org/apache/http/impl/client/SystemDefaultHttpClient' }
  Bytecode:
    0000000: bb00 0359 2ab7 0004 4cb2 0005 b900 0601
    0000010: 0099 001e b200 05bb 0007 59b7 0008 1209
    0000020: b600 0a2b b600 0bb6 000c b900 0d02 00b8
    0000030: 000e 4d2c 2bb8 000f 2cb0               
  Stackmap Table:
    append_frame(@47,Object[#141])

    at org.apache.solr.client.solrj.impl.CloudSolrClient.<init>(CloudSolrClient.java:195)
    at com.lucidworks.spark.SolrSupport.getSolrServer(SolrSupport.java:81)
    at com.lucidworks.spark.SolrSupport$5.call(SolrSupport.java:212)
    at com.lucidworks.spark.SolrSupport$5.call(SolrSupport.java:210)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:222)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:222)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

From what I can tell the SystemDefaultHttpClient extends from CloseableHttpClient so I don't see why this fails. Does anyone have an idea what is wrong?

I am using:

  • Cloudera 5.5
  • Solr 4.10.3-CDH-5.5
  • I can connect to zookeeper on those nodes and see data in the /solr root
  • The collection I have given exists, in zookeeper and the Solr UI

Dependencies from build.sbt:

libraryDependencies ++= Seq(
  "com.typesafe" % "config" % "1.3.0",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.5.0-M1",
  "org.apache.spark" % "spark-streaming-kafka_2.10"  % "1.5.1" % "provided",
  "org.apache.spark" %% "spark-core" % "1.5.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.5.1" % "provided",
  "org.apache.spark" %% "spark-graphx" % "1.5.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.5.1" % "provided",
  "org.json4s" % "json4s-jackson_2.10" % "3.2.11",
  "com.typesafe.play" %% "play-json" % "2.3.4",
  "com.fasterxml.uuid" % "java-uuid-generator" % "3.1.4",
  "joda-time" % "joda-time" % "2.9.1",
  "com.databricks" %% "spark-avro" % "2.0.1",
  "com.lucidworks.spark" % "spark-solr" % "1.1.2",
  "org.apache.solr" % "solr-core" % "4.10.3",
  "org.apache.solr" % "solr-common" % "1.3.0"
)

Thanks,
Donal

At Solr datasource, support for null-safe equality comparison.

I wanted to send an email first but I could not find a proper email address, so I opened an issue here.

I assume that this code has support for Spark 1.5. For Spark 1.5, EqualNullSafe can be pushed down as a filter operator. (https://issues.apache.org/jira/browse/SPARK-9814)

I think this can easily be done by adding a combination of an equality comparison query and [* TO *], adding an if case at SolrRelation.fq() in com.lucidworks.spark.

Could I open a PR for this?

CDH Spark job fails with java.lang.VerifyError at solrj.impl.HttpClientUtil

I'm running CDH 5.7.1 and submitting a Spark job built with spark-solr 2.0.1 in yarn-cluster mode and the job is failing with the following error that arises from a binary compatibility issue in org.apache.solr.client.solrj.impl.HttpClientUtil. The job runs fine in local mode allowing me to index a dataframe to Solr 6.1.0.

Is spark-solr supported with the Cloudera distribution of Spark (here 1.6.0-cdh5.7.1) in combination with Solr 6.1.0 running on YARN?

It may be that an old version of org.apache.httpcomponents:httpclient is being picked up from the Spark classpath, but I'm not sure. I've tried various things such as specifying the --jars argument with spark-submit to provide the appropriate solr-solrj and httpclient jars with no luck, as well as numerous build.sbt changes to specify versions of solrj and httpclient explicitly, again with no luck.

Any insights would be greatly appreciated.

Spark job history error from yarn-cluster mode:

16/07/21 02:10:07 ERROR yarn.ApplicationMaster: User class threw exception: com.google.common.util.concurrent.ExecutionError: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/solr/client/solrj/impl/HttpClientUtil.createClient(Lorg/apache/solr/common/params/SolrParams;Lorg/apache/http/conn/ClientConnectionManager;)Lorg/apache/http/impl/client/CloseableHttpClient; @58: areturn
  Reason:
    Type 'org/apache/http/impl/client/DefaultHttpClient' (current frame, stack[0]) is not assignable to 'org/apache/http/impl/client/CloseableHttpClient' (from method signature)
  Current Frame:
    bci: @58
    flags: { }
    locals: { 'org/apache/solr/common/params/SolrParams', 'org/apache/http/conn/ClientConnectionManager', 'org/apache/solr/common/params/ModifiableSolrParams', 'org/apache/http/impl/client/DefaultHttpClient' }
    stack: { 'org/apache/http/impl/client/DefaultHttpClient' }
  Bytecode:
    0x0000000: bb00 0359 2ab7 0004 4db2 0005 b900 0601
    0x0000010: 0099 001e b200 05bb 0007 59b7 0008 1209
    0x0000020: b600 0a2c b600 0bb6 000c b900 0d02 002b
    0x0000030: b800 104e 2d2c b800 0f2d b0            
  Stackmap Table:
    append_frame(@47,Object[#143])
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2232)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
    at com.lucidworks.spark.util.SolrSupport$.getCachedCloudClient(SolrSupport.scala:93)
    at com.lucidworks.spark.util.SolrSupport$.getSolrBaseUrl(SolrSupport.scala:97)
    at com.lucidworks.spark.util.SolrQuerySupport$.getUniqueKey(SolrQuerySupport.scala:82)
    at com.lucidworks.spark.rdd.SolrRDD.<init>(SolrRDD.scala:32)
    at com.lucidworks.spark.SolrRelation.<init>(SolrRelation.scala:63)
    at solr.DefaultSource.createRelation(DefaultSource.scala:26)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)
    at com.tempurer.intelligence.searchindexer.SolrIndexer3$.runJob(SolrIndexer3.scala:127)
    at com.tempurer.intelligence.searchindexer.SolrIndexer3$.main(SolrIndexer3.scala:80)
    at com.tempurer.intelligence.searchindexer.SolrIndexer3.main(SolrIndexer3.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: java.lang.VerifyError: Bad return type

I'm building the Spark app as a fat jar with the sbt-assembly plugin and here is my build.sbt file:

name := "search-indexer"
version := "0.1.0-SNAPSHOT"
scalaVersion := "2.10.6"

resolvers ++= Seq(
  "Cloudera CDH 5.0"        at "https://repository.cloudera.com/artifactory/cloudera-repos"
)

libraryDependencies ++= Seq(
  "org.apache.hadoop"           % "hadoop-common"           % "2.6.0-cdh5.7.0" % "provided",
  "org.apache.hadoop"           % "hadoop-hdfs"             % "2.6.0-cdh5.7.0" % "provided",
  "org.apache.hive"             % "hive-exec"               % "1.1.0-cdh5.7.0",
  "org.apache.spark"            % "spark-core_2.10"         % "1.6.0-cdh5.7.0" % "provided",
  "org.apache.spark"            % "spark-sql_2.10"          % "1.6.0-cdh5.7.0" % "provided",
  "org.apache.spark"            % "spark-catalyst_2.10"     % "1.6.0-cdh5.7.0" % "provided",
  "org.apache.spark"            % "spark-mllib_2.10"        % "1.6.0-cdh5.7.0" % "provided",
  "org.apache.spark"            % "spark-graphx_2.10"       % "1.6.0-cdh5.7.0" % "provided",
  "org.apache.spark"            % "spark-streaming_2.10"    % "1.6.0-cdh5.7.0" % "provided",
  "com.databricks"              % "spark-avro_2.10"         % "2.0.1",
  "com.databricks"              % "spark-csv_2.10"          % "1.4.0",
    "com.lucidworks.spark"        % "spark-solr"              % "2.0.1",
  "com.fasterxml.jackson.core"  % "jackson-core"            % "2.8.0",  // Solves runtime error: java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z
  "org.scalatest"               % "scalatest_2.10"          % "2.2.4"          % "test"
)

// See: https://github.com/sbt/sbt-assembly
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
   {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
   }
}

Here is the logged YARN context on the server:

YARN executor launch context:
  env:
    CLASSPATH -> {{HADOOP_COMMON_HOME}}/../../../CDH/lib/hbase/lib/htrace-core-3.1.0-incubating.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/lib/spark/lib/spark-assembly.jar<CPS>$HADOOP_CLIENT_CONF_DIR<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$MR2_CLASSPATH<CPS>{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ST4-4.0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-core-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-fate-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-start-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-trace-1.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/activation-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ant-1.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ant-launcher-1.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/antlr-2.7.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/antlr-runtime-3.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aopalliance-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apache-log4j-extras-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apache-log4j-extras-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apacheds-i18n-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apacheds-kerberos-codec-2.0.0-M15.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/api-asn1-api-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/api-util-1.0.0-M20.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-3.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-commons-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-tree-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/async-1.4.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asynchbase-1.5.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-compiler-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-ipc-1.7.6-cdh5.7.1-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-ipc-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-mapred-1.7.6-cdh5.7.1-hadoop2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-maven-plugin-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-protobuf-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-service-archetype-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-thrift-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sdk-core-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sdk-kms-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sd
k-s3-1.10.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/bonecp-0.8.0.RELEASE.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-avatica-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-core-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-linq4j-1.0.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-beanutils-1.7.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-beanutils-core-1.8.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-cli-1.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-codec-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-codec-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-collections-3.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-compiler-2.7.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-compress-1.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-configuration-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-daemon-1.0.13.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-dbcp-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-digester-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-el-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-httpclient-3.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-httpclient-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-io-2.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-jexl-2.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-lang-2.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-logging-1.1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-math-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-math3-3.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-net-3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-pool-1.5.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-vfs2-2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-client-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-client-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-framework-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-framework-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-recipes-2.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-recipes-2.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-api-jdo-3.2.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-core-3.2.10.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-rdbms-3.2.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/derby-10.11.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/eigenbase-properties-1.1.4.jar:{{HADOOP_COMMON_HOME}}/../../../
CDH-5.7.1-1.cdh5.7.1.p0.11/jars/fastutil-6.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/findbugs-annotations-1.3.9-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-avro-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-dataset-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-file-channel-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-hdfs-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-hive-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-irc-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-jdbc-channel-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-jms-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-kafka-channel-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-kafka-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-auth-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-configuration-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-core-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-embedded-agent-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-hbase-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-kafka-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-log4jappender-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-node-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-sdk-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-scribe-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-spillable-memory-channel-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-taildir-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-thrift-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-tools-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-twitter-source-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-annotation_1.0_spec-1.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-jaspic_1.0_spec-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-jta_1.1_spec-1.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/groovy-all-2.4.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/gson-2.2.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-11.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-11.0.jar:{{HADOOP_COMMON_HOME}}/
../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-14.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guice-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guice-servlet-3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-annotations-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-ant-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-archive-logs-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-archives-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-auth-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-aws-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-azure-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-common-2.6.0-cdh5.7.1-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-common-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-datajoin-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-distcp-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-extras-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-gridmix-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-2.6.0-cdh5.7.1-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-nfs-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1-tests.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-examples-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-nfs-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-openstack-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-rumen-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-sls-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-streaming-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-api-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CD
H-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-client-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-common-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-registry-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-common-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-tests-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hamcrest-core-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hamcrest-core-1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-annotations-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-client-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-common-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-hadoop-compat-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-hadoop2-compat-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-protocol-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-server-1.2.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/high-scale-lib-1.1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-accumulo-handler-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-ant-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-beeline-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-cli-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-common-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-contrib-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-exec-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-hbase-handler-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-hwi-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-jdbc-1.1.0-cdh5.7.1-standalone.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-jdbc-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-metastore-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-serde-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-service-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CD
H-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-0.23-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-common-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-scheduler-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-testutils-1.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/htrace-core-3.2.0-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/htrace-core4-4.0.1-incubating.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/httpclient-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/httpcore-4.2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hue-plugins-3.9.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/irclib-1.10.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-annotations-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-core-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-core-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-databind-2.2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-jaxrs-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-mapper-asl-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-xc-1.8.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jamon-runtime-2.3.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/janino-2.7.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jasper-compiler-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jasper-runtime-5.5.23.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/java-xmlbuilder-0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/javax.inject-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jaxb-api-2.2.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jaxb-impl-2.2.3-1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jcommander-1.32.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jdo-api-3.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-client-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-core-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-guice-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-json-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-server-1.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jets3t-0.9.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jettison-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-6.1.26.cloudera.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-all-7.6.0.v20120127.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-all-server-7.6.0.v20120127.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-util-6.1.26.cloudera.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-util-6.1.26.cloudera.4.jar:{{
HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jline-2.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jline-2.12.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/joda-time-1.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/joda-time-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jopt-simple-4.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jpam-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsch-0.1.42.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsp-api-2.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsr305-1.3.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsr305-3.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jta-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/junit-4.11.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kafka-clients-0.9.0-kafka-2.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kafka_2.10-0.9.0-kafka-2.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-core-1.0.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-hbase-1.0.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-hive-1.0.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-hadoop-compatibility-1.0.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/leveldbjni-all-1.8.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/libfb303-0.9.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/libthrift-0.9.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/log4j-1.2.16.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/log4j-1.2.17.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/logredactor-1.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/lz4-1.3.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mail-1.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mapdb-0.9.9.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-api-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-provider-svn-commons-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-provider-svnexe-1.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-core-2.2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-core-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-json-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-jvm-3.0.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mina-core-2.0.4.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mockito-all-1.8.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/netty-3.6.2.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/netty-all-4.0.23.Final.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/opencsv-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/oro-2.0.8.jar:{{HADOOP_
COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/paranamer-2.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-avro-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-cascading-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-column-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-common-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-encoding-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1-javadoc.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1-sources.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-generator-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-hadoop-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-hadoop-bundle-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-jackson-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-pig-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-pig-bundle-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-protobuf-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-scala_2.10-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-scrooge_2.10-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-test-hadoop2-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-thrift-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-tools-1.5.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/plexus-utils-1.5.6.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/protobuf-java-2.5.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/regexp-1.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/scala-library-2.10.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/serializer-2.7.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/servlet-api-2.5-20110124.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/servlet-api-2.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/slf4j-api-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/slf4j-log4j12-1.7.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/snappy-java-1.0.4.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/spark-1.6.0-cdh5.7.1-yarn-shuffle.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stax-api-1.0-2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stax-api-1.0.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stringtemplate-3.2.1.jar:{{HADOOP_COMMON_HOME}}/../.
./../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/super-csv-2.2.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/tempus-fugit-1.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-avro-1.7.6-cdh5.7.1-hadoop2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-avro-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-core-1.7.6-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-core-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-media-support-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-stream-3.0.3.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/unused-1.0.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/velocity-1.5.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/velocity-1.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xalan-2.7.2.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xercesImpl-2.9.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xml-apis-1.3.04.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xmlenc-0.52.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xz-1.0.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/zkclient-0.7.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/jars/zookeeper-3.4.5-cdh5.7.1.jar:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/LICENSE.txt:{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/NOTICE.txt
    SPARK_YARN_CACHE_ARCHIVES -> hdfs://master1:8020/user/root/.sparkStaging/application_1469003912791_0144/__spark_conf__5530597697271269573.zip#__spark_conf__
    SPARK_LOG_URL_STDERR -> http://worker1:8042/node/containerlogs/container_1469003912791_0144_01_000002/root/stderr?start=-4096
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 226590589
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1469003912791_0144
    SPARK_DIST_CLASSPATH -> /opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ST4-4.0.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-core-1.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-fate-1.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-start-1.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/accumulo-trace-1.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ant-1.9.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/ant-launcher-1.9.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/antlr-2.7.7.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/antlr-runtime-3.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apache-log4j-extras-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apache-log4j-extras-1.2.17.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-commons-3.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asm-tree-3.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/async-1.4.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/asynchbase-1.5.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-compiler-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-ipc-1.7.6-cdh5.7.1-tests.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-ipc-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-mapred-1.7.6-cdh5.7.1-hadoop2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-maven-plugin-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-protobuf-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-service-archetype-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/avro-thrift-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sdk-core-1.10.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sdk-kms-1.10.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/aws-java-sdk-s3-1.10.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/bonecp-0.8.0.RELEASE.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-avatica-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-core-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/calcite-linq4j-1.0.0-incubating.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-codec-1.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-collections-3.
2.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-compiler-2.7.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-dbcp-1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-httpclient-3.0.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-jexl-2.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-math-2.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-pool-1.5.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/commons-vfs2-2.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-client-2.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-framework-2.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-recipes-2.6.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-api-jdo-3.2.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-core-3.2.10.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/datanucleus-rdbms-3.2.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/derby-10.11.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/eigenbase-properties-1.1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/fastutil-6.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/findbugs-annotations-1.3.9-1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-avro-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-dataset-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-file-channel-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-hdfs-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-hive-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-irc-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-jdbc-channel-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-jms-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-kafka-channel-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-kafka-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-auth-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-configuration-1.6.0-cdh5.7.1.jar:/opt/clo
udera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-core-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-embedded-agent-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-hbase-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-kafka-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-log4jappender-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-node-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-ng-sdk-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-scribe-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-spillable-memory-channel-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-taildir-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-thrift-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-tools-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/flume-twitter-source-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-annotation_1.0_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-jaspic_1.0_spec-1.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/geronimo-jta_1.1_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/groovy-all-2.4.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-11.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guava-14.0.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-annotations-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-ant-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-archive-logs-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-archives-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-auth-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-aws-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-azure-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-common-2.6.0-cdh5.7.1-tests.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-common-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-datajoin-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-distcp-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-extras-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-gridmix-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-2.6.0-cdh5.7.1-tests.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-hdfs-nfs-2.6
.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1-tests.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-mapreduce-examples-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-nfs-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-openstack-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-rumen-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-sls-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-streaming-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-api-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-client-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-common-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-registry-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-common-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-tests-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hamcrest-core-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-annotations-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-client-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-common-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-hadoop-compat-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-hadoop2-compat-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-protocol-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hbase-server-1.2.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/ja
rs/high-scale-lib-1.1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-accumulo-handler-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-ant-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-beeline-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-cli-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-common-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-contrib-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-exec-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-hbase-handler-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-hwi-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-jdbc-1.1.0-cdh5.7.1-standalone.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-jdbc-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-metastore-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-serde-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-service-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-0.23-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-common-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-shims-scheduler-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hive-testutils-1.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/htrace-core-3.2.0-incubating.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/hue-plugins-3.9.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/irclib-1.10.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/janino-2.7.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jcommander-1.32.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jdo-api-3.0.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-client-1.9.jar:/opt
/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-all-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-all-server-7.6.0.v20120127.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-util-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jline-2.12.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/joda-time-1.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/joda-time-2.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jopt-simple-4.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jpam-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/jta-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kafka-clients-0.9.0-kafka-2.0.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kafka_2.10-0.9.0-kafka-2.0.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-core-1.0.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-hbase-1.0.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-data-hive-1.0.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/kite-hadoop-compatibility-1.0.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/libfb303-0.9.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/libthrift-0.9.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/log4j-1.2.16.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/lz4-1.3.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mail-1.4.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mapdb-0.9.9.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-api-1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-provider-svn-commons-1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/maven-scm-provider-svnexe-1.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-core-2.2.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-core-3.0.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-json-3.0.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/metrics-jvm-3.0.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/microsoft-windowsazure-storage-sdk-0.6.
0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mina-core-2.0.4.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/netty-all-4.0.23.Final.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/opencsv-2.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/oro-2.0.8.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-avro-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-cascading-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-column-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-common-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-encoding-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1-javadoc.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1-sources.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-format-2.1.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-generator-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-hadoop-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-hadoop-bundle-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-jackson-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-pig-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-pig-bundle-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-protobuf-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-scala_2.10-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-scrooge_2.10-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-test-hadoop2-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-thrift-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/parquet-tools-1.5.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/plexus-utils-1.5.6.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/regexp-1.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/scala-library-2.10.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/serializer-2.7.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/servlet-api-2.5-20110124.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/slf4j-log4j12-1.7.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/spark-1.6.0-cdh5.7.1-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stax-api-1.0.1.jar:/opt/cloudera/pa
rcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/stringtemplate-3.2.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/super-csv-2.2.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/tempus-fugit-1.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-avro-1.7.6-cdh5.7.1-hadoop2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-avro-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/trevni-core-1.7.6-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-core-3.0.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-media-support-3.0.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/twitter4j-stream-3.0.3.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/unused-1.0.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/velocity-1.5.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/velocity-1.7.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xalan-2.7.2.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xercesImpl-2.9.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xml-apis-1.3.04.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/zkclient-0.7.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/jars/zookeeper-3.4.5-cdh5.7.1.jar:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/LICENSE.txt:/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/NOTICE.txt
    SPARK_YARN_CACHE_ARCHIVES_FILE_SIZES -> 30977
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
    SPARK_USER -> root
    SPARK_YARN_CACHE_ARCHIVES_TIME_STAMPS -> 1469081399503
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1469081399409
    SPARK_LOG_URL_STDOUT -> http://worker1:8042/node/containerlogs/container_1469003912791_0144_01_000002/root/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://master1:8020/user/root/.sparkStaging/application_1469003912791_0144/search-indexer-assembly-0.1.0-SNAPSHOT.jar#__app__.jar
    SPARK_YARN_CACHE_ARCHIVES_VISIBILITIES -> PRIVATE

  command:
    LD_LIBRARY_PATH="{{HADOOP_COMMON_HOME}}/../../../CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms16384m -Xmx16384m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.authenticate=false' '-Dspark.driver.port=53332' '-Dspark.shuffle.service.port=7337' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:53332 --executor-id 1 --hostname worker1 --cores 1 --app-id application_1469003912791_0144 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr

error: bad symbolic reference.

Hi everyone,
I'm getting this error in the Spark Scala shell.

scala> import com.lucidworks.spark.rdd.SolrRDD
import com.lucidworks.spark.rdd.SolrRDD

scala> val s = new SolrRDD("192.168.1.102:2181", "collection", sc)

error: bad symbolic reference. A signature in SolrRDD.class refers to term solr
in package org.apache which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling SolrRDD.class.
<console>:28: error: bad symbolic reference. A signature in SolrRDD.class refers to term common
in value org.apache.solr which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling SolrRDD.class.
         val s = new SolrRDD("192.168.1.102:2181", "collection", sc)

I've built the package from source using
mvn clean package -DskipTests
The build process completed successfully and created the jar file in the target directory.

Basically, I'm not a Scala person, so are there any APIs available in PySpark? And what are the version compatibilities?
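For what it's worth, the connector is exposed as a Spark SQL data source, so it can be used from PySpark without writing any Scala. A minimal sketch, assuming the shaded jar has been passed to pyspark via --jars; the ZooKeeper connect string and collection name below are placeholders for your own cluster (on Spark 1.x, use sqlContext.read instead of spark.read):

# Start the shell with the connector on the classpath, for example:
#   ./bin/pyspark --jars spark-solr-{version}-shaded.jar
# "localhost:9983" and "poems" are placeholder connection values.
df = (spark.read.format("solr")
      .option("zkhost", "localhost:9983")
      .option("collection", "poems")
      .load())
df.printSchema()
df.show()

The version compatibility chart at the top of this page applies regardless of which language the connector is called from.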

Accessing a collection fails with NullPointerException

Hi, on the current master branch I am running into the following issue, which I did not experience in the 1.1.2 release.

[vagrant@test550-master spark-solr-master]$ PROJECT_HOME=`pwd` && spark-shell --jars $PROJECT_HOME/target/spark-solr-1.2.0-SNAPSHOT-shaded.jar --driver-class-path $PROJECT_HOME/target/spark-solr-1.2.0-SNAPSHOT-shaded.jar

scala> val poems = sqlContext.load("solr", Map("zkHost" -> "test550-master:2181/solr", "collection" -> "poems"))
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
2016-01-05 14:56:36,987 [main] INFO  SolrZkClient  - Using default ZkCredentialsProvider
2016-01-05 14:56:37,032 [main] INFO  ConnectionManager  - Waiting for client to connect to ZooKeeper
2016-01-05 14:56:37,054 [zkCallback-2-thread-1] INFO  ConnectionManager  - Watcher shaded.apache.solr.common.cloud.ConnectionManager@7d401432 name:ZooKeeperConnection Watcher:test550-master:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2016-01-05 14:56:37,055 [main] INFO  ConnectionManager  - Client is connected to ZooKeeper
2016-01-05 14:56:37,055 [main] INFO  SolrZkClient  - Using default ZkACLProvider
2016-01-05 14:56:37,059 [main] INFO  ZkStateReader  - Updating cluster state from ZooKeeper...
2016-01-05 14:56:37,545 [main] WARN  HiveConf  - HiveConf of name hive.enable.spark.execution.engine does not exist
poems: org.apache.spark.sql.DataFrame = [id: string, text: string, _version_: bigint]

scala> poems.printSchema()
root
 |-- id: string (nullable = false)
 |-- text: string (nullable = false)
 |-- _version_: long (nullable = true)

scala> poems.show()
2016-01-05 14:59:25,856 [main] INFO  SolrRelation  - Building Solr scan using fields=[id, text, _version_]
2016-01-05 14:59:25,857 [main] INFO  SolrRelation  - Constructed SolrQuery: q=*%3A*&rows=1000&sort=id+asc&fl=id%2Ctext%2C_version_&collection=poems
2016-01-05 14:59:29,249 [task-result-getter-0] WARN  TaskSetManager  - Lost task 0.0 in stage 0.0 (TID 0, test550-master.fq.dn): java.lang.RuntimeException: shaded.apache.solr.client.solrj.SolrServerException: java.lang.NullPointerException
    at com.lucidworks.spark.query.StreamingResultsIterator.hasNext(StreamingResultsIterator.java:66)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: shaded.apache.solr.client.solrj.SolrServerException: java.lang.NullPointerException
    at com.lucidworks.spark.SolrRDD.querySolr(SolrRDD.java:956)
    at com.lucidworks.spark.query.StreamingResultsIterator.fetchNextPage(StreamingResultsIterator.java:91)
    at com.lucidworks.spark.query.StreamingResultsIterator.hasNext(StreamingResultsIterator.java:61)
    ... 27 more
Caused by: java.lang.NullPointerException
    at com.lucidworks.spark.SolrRDD.applyFields(SolrRDD.java:622)
    at com.lucidworks.spark.SolrRDD.querySolr(SolrRDD.java:906)
    ... 29 more

Trying with a temp table:

scala> poems.registerTempTable("poems")

scala> val p = sqlContext.sql("SELECT * from poems")
2016-01-05 15:02:01,588 [main] INFO  ParseDriver  - Parsing command: SELECT * from poems
2016-01-05 15:02:02,286 [main] INFO  ParseDriver  - Parse Completed
p: org.apache.spark.sql.DataFrame = [id: string, text: string, _version_: bigint]

scala> p.show()
2016-01-05 15:02:06,075 [main] INFO  SolrRelation  - Building Solr scan using fields=[id, text, _version_]
2016-01-05 15:02:06,075 [main] INFO  SolrRelation  - Constructed SolrQuery: q=*%3A*&rows=1000&sort=id+asc&fl=id%2Ctext%2C_version_&collection=poems
2016-01-05 15:02:09,242 [task-result-getter-0] WARN  TaskSetManager  - Lost task 0.0 in stage 0.0 (TID 0, test550-master.fq.dn): java.lang.RuntimeException: shaded.apache.solr.client.solrj.SolrServerException: java.lang.NullPointerException
    at com.lucidworks.spark.query.StreamingResultsIterator.hasNext(StreamingResultsIterator.java:66)
    at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: shaded.apache.solr.client.solrj.SolrServerException: java.lang.NullPointerException
    at com.lucidworks.spark.SolrRDD.querySolr(SolrRDD.java:956)
    at com.lucidworks.spark.query.StreamingResultsIterator.fetchNextPage(StreamingResultsIterator.java:91)
    at com.lucidworks.spark.query.StreamingResultsIterator.hasNext(StreamingResultsIterator.java:61)
    ... 27 more
Caused by: java.lang.NullPointerException
    at com.lucidworks.spark.SolrRDD.applyFields(SolrRDD.java:622)
    at com.lucidworks.spark.SolrRDD.querySolr(SolrRDD.java:906)
    ... 29 more

The same does work with the current 1.1.2 release:

[vagrant@test550-master spark-solr-1.1.2]$ PROJECT_HOME=`pwd` && spark-shell --jars $PROJECT_HOME/target/spark-solr-1.1.2-shaded.jar --driver-class-path $PROJECT_HOME/target/spark-solr-1.1.2-SNAPSHOT-shaded.jar

scala> val poems = sqlContext.load("solr", Map("zkHost" -> "test550-master:2181/solr", "collection" -> "poems"))
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
poems: org.apache.spark.sql.DataFrame = [_version_: bigint, id: string, text: string]

scala> poems.show()
16/01/05 15:06:35 INFO SolrRelation: Building Solr scan using fields=[_version_, id, text]
16/01/05 15:06:35 INFO SolrRelation: Constructed SolrQuery: q=*%3A*&rows=1000&sort=id+asc&collection=poems&fl=_version_%2Cid%2Ctext
16/01/05 15:06:35 INFO SparkContext: Starting job: show at <console>:22
16/01/05 15:06:35 INFO DAGScheduler: Got job 1 (show at <console>:22) with 1 output partitions
16/01/05 15:06:35 INFO DAGScheduler: Final stage: ResultStage 1(show at <console>:22)
16/01/05 15:06:35 INFO DAGScheduler: Parents of final stage: List()
16/01/05 15:06:35 INFO DAGScheduler: Missing parents: List()
16/01/05 15:06:35 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[9] at show at <console>:22), which has no missing parents
16/01/05 15:06:35 INFO MemoryStore: ensureFreeSpace(3976) called with curMem=6354, maxMem=556038881
16/01/05 15:06:35 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.9 KB, free 530.3 MB)
16/01/05 15:06:35 INFO MemoryStore: ensureFreeSpace(2377) called with curMem=10330, maxMem=556038881
16/01/05 15:06:35 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 530.3 MB)
16/01/05 15:06:35 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.0.0.4:46263 (size: 2.3 KB, free: 530.3 MB)
16/01/05 15:06:35 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
16/01/05 15:06:35 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at show at <console>:22)
16/01/05 15:06:35 INFO YarnScheduler: Adding task set 1.0 with 1 tasks
16/01/05 15:06:35 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, test550-master.fq.dn, partition 0,PROCESS_LOCAL, 2145 bytes)
16/01/05 15:06:36 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on test550-master.fq.dn:54886 (size: 2.3 KB, free: 530.3 MB)
16/01/05 15:06:38 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 2365 ms on test550-master.fq.dn (1/1)
16/01/05 15:06:38 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/01/05 15:06:38 INFO DAGScheduler: ResultStage 1 (show at <console>:22) finished in 2.366 s
16/01/05 15:06:38 INFO DAGScheduler: Job 1 finished: show at <console>:22, took 2.387781 s
+-------------------+---+--------------------+
|          _version_| id|                text|
+-------------------+---+--------------------+
|1520710855906295808|  1|Mary had a little...|
|1520710855977598976|  2|The quick brown f...|
+-------------------+---+--------------------+


scala>

How about the performance of count()?

Solr quickly returns numFound in the response to a Solr query,
but I know Spark SQL will re-filter all RDD rows returned by your buildScan(),
so do you have any ideas on improving count() performance?
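For reference, a minimal SolrJ sketch of asking Solr for numFound directly with a rows=0 query instead of counting through the DataFrame (SolrJ 7+ client builder shown; older SolrJ versions construct the client differently; the ZooKeeper address and collection name are placeholders):

import java.util.{Collections, Optional}
import org.apache.solr.client.solrj.SolrQuery
import org.apache.solr.client.solrj.impl.CloudSolrClient

// Ask Solr for the hit count only; rows=0 means no documents are transferred.
val client = new CloudSolrClient.Builder(
    Collections.singletonList("localhost:2181"), Optional.empty[String]()).build()
client.setDefaultCollection("collection1")
val q = new SolrQuery("*:*")
q.setRows(0)
val numFound = client.query(q).getResults.getNumFound
println(s"numFound = $numFound")
client.close()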

No longer able to run SolrRelationTest in master

With master at commit 02ee91f I can no longer successfully run the unit tests. The issue seems to be with SolrRelationTest.

I run:

$ export MAVEN_OPTS="-Xmx3g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
$ mvn clean test -Dtest=com.lucidworks.spark.SolrRelationTest

I get OOMs and hanging JVM processes. The commit where this starts is 82b7021.

If I remove the unit tests related to 82b7021 in SolrRelationTest, the OOMs are gone, but the test fails:

2015-10-09 12:24:20,080 [qtp428279569-24] ERROR SolrCore  - org.apache.solr.common.SolrException: ERROR: [doc=1] Error adding field 'field5_ii'='scala.collection.mutable.WrappedArray$ofRef:WrappedArray(1000)' msg=For input string: "scala.collection.mutable.WrappedArray$ofRef:WrappedArray(1000)"
    at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:176)
    at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
    at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:101)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:179)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:135)
    at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:241)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
    at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:206)
    at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:126)
    at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:186)
    at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:111)
    at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:105)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
    at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:300)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:497)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: "scala.collection.mutable.WrappedArray$ofRef:WrappedArray(1000)"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:492)
    at java.lang.Integer.parseInt(Integer.java:527)
    at org.apache.solr.schema.TrieField.createField(TrieField.java:631)
    at org.apache.solr.schema.TrieField.createFields(TrieField.java:694)
    at org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
    at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
    ... 47 more

Filtering on an indexed text field doesn't honour quotations in phrase queries

When the filter query is built for a text field it strips all the quotation marks from the string. So in my Solr query I would run something like this:
q=*:*&fq=body:"William Doran" and this runs as a phrase query, and I get back, for example, 200 documents.

However, when I load up my Solr index as a DataFrame or RDD and issue a similar query,
df.where("body = \"William Doran\"").select("id").take(1), it gets translated into
q=*:*&fq=body:William Doran without the quotation marks, which causes a term query over the two terms that attempts to return 10 million results... BOOM!
I have tried the same with Spark SQL but the same behaviour occurs. Any ideas on where to look in the code to figure out where the fq filters are being parsed and converted?

I'd also be keen to know why you can only provide a standard query param (q) at config time instead of only using fq. It seems as though it would be useful to be able to provide a q param when issuing a select or filter command on a Dataset/RDD.
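A minimal sketch of the config-time workaround, handing the phrase to Solr through the data source's query option so the quotes never go through the DataFrame filter translation (connection values are placeholders):

val df = sqlContext.read.format("solr")
  .option("zkhost", "localhost:2181/solr")
  .option("collection", "mycollection")
  .option("query", "body:\"William Doran\"")  // Solr receives the phrase query verbatim
  .load()
df.select("id").take(1)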

Solr date format problems

Hi All,

When running a query in Spark for a particular date via the spark-solr plugin I get the Solr exception "Invalid Date String". It looks like the spark-solr plugin is converting java.sql.Timestamp to the format "yyyy-MM-dd hh:MM:ss" instead of Solr's required UTC format: "yyyy-MM-dd'T'hh:MM:ss'Z'". I've tracked that the conversion takes place here: SolrRelation.fq( Filter f ), with a direct call to String.valueOf( eq.value ) - lines circa 254 in SolrRelation.java. I'm using branch spark1_5_x.
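For reference, a small sketch of the kind of conversion that would produce the format Solr expects (illustrative only, not the plugin's actual code):

import java.sql.Timestamp
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

// Render a timestamp in Solr's UTC form, e.g. 2015-11-30T00:00:00Z, instead of the
// "yyyy-MM-dd HH:mm:ss" form produced by String.valueOf; the wall-clock value is treated as UTC.
def toSolrDate(ts: Timestamp): String =
  ts.toLocalDateTime.atOffset(ZoneOffset.UTC).format(DateTimeFormatter.ISO_INSTANT)

toSolrDate(Timestamp.valueOf("2015-11-30 00:00:00"))  // "2015-11-30T00:00:00Z"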

Let me know if this is a bug or I'm not using it correctly. My query is:

sqlContext.sql( "select count(*) from some_table where Date_='2015-11-30 00:00:00.0'" ).show()

Thanks,
Marcin

2016-11-26 14:20:16,050 [main] WARN SolrQuerySupport - Can't get uniqueKey for collection1 due to: java.net.ConnectException: Connection refused: connect

Hi, I am using the following piece of code to write to Solr using the DataFrame API.

val writeoptions = scala.collection.immutable.HashMap(
  "zkhost" -> "192.168.23.109:2181",
  "collection" -> "collection1",
  "soft_commit_secs" -> "10")
df.write.format("solr").options(writeoptions).mode(SaveMode.Overwrite).save
But it returns the following error:

2016-11-26 14:20:16,050 [main] WARN SolrQuerySupport - Can't get uniqueKey for collection1 due to: java.net.ConnectException: Connection refused: connect
2016-11-26 14:20:20,108 [main] ERROR SolrQuerySupport - Can't get field type metadata from Solr url http://127.0.1.1:8983/solr/collection1/schema/fieldtypes
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Connection refused: connect
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldTypeToClassMap(SolrQuerySupport.scala:506)
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldTypes(SolrQuerySupport.scala:258)
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldTypes(SolrQuerySupport.scala:254)
    at com.lucidworks.spark.util.SolrRelationUtil$.getBaseSchema(SolrRelationUtil.scala:35)
    at com.lucidworks.spark.SolrRelation.<init>(SolrRelation.scala:83)
    at solr.DefaultSource.createRelation(DefaultSource.scala:26)
Why is it connecting to 127.0.0.1? Do I need to create the collection before writing data into it?

collection not set in solrRDD

I execute the following code to read Solr data into an RDD:
JavaRDD solrJavaRDD = solrRDD.query(jsc, solrQuery, useDeepPagingCursor);

I got an exception:
solrrdd org.apache.solr.client.solrj.SolrServerException: No collection param specified on request and no default collection has been set.

I found that SolrRDD, inside the get function, sets the collection by
params.set("collection", collection);

instead of
cloudSolrServer.setDefaultCollection(collection);

I changed it in my code and it fixed the issue.

Is ZooKeeper mandatory?

I wonder if I have to use ZooKeeper. Can SolrRDD provide a way to connect without the ZooKeeper parameter?

Spark 1.5.x compatibility

Using the master branch (c9911ed), I simply changed spark.version in pom.xml to 1.5.0.

I see one unit test fail:

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.365 sec <<< FAILURE!
testFilterSupport(com.lucidworks.spark.SolrRelationTest)  Time elapsed: 5.823 sec  <<< ERROR!
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 6, localhost): java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Timestamp
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$TimestampConverter$.toCatalystImpl(CatalystTypeConverters.scala:308)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:396)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:63)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:60)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:904)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:177)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
    at com.lucidworks.spark.SolrRelationTest.testFilterSupport(SolrRelationTest.java:50)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.lang.ClassCastException: java.util.Date cannot be cast to java.sql.Timestamp
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$TimestampConverter$.toCatalystImpl(CatalystTypeConverters.scala:308)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:396)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:63)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$2.apply(ExistingRDD.scala:60)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

SolrRelationUtil error in Zeppelin %sql

Hello,
I am using Zeppelin (SPARK) and I am trying to run the following, but I am getting an error.
Thanks,
x10ba

Setup

  1. SPARK -> 1.6.0.2.4
  2. HDP -> 2.4.0.0

Code
// load dependencies
%dep
z.reset()
z.load("commons-io:commons-io:2.5")

z.load("org.apache.spark:spark-core_2.10:1.6.0")
z.load("org.apache.spark:spark-streaming_2.10:1.6.0")
z.load("org.apache.spark:spark-streaming-kafka_2.10:1.6.0")

z.load("com.lucidworks.spark:spark-solr:2.0.4")

imports etc

//java and scala
import java.net._
import scala.collection.immutable._

//commons
import org.apache.commons.io.IOUtils

//spark
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.rdd.PairRDDFunctions
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.StringType
import org.apache.spark.sql.{SQLContext, DataFrame}
import org.apache.spark.api.java.JavaRDD

// implicits works with rdd
import sqlContext.implicits._

// SOLR
import com.lucidworks.spark.rdd.SolrRDD
import com.lucidworks.spark.util.{SolrRelationUtil}

Code
// register DF as TempTable and query
val df = sqlContext.read.format("solr").options().load
df.registerTempTable("tempTable")

// Zeppelin Paragraph
%sql
select * from tempTable -> this results in the following error

ERROR
java.lang.ClassNotFoundException: com.lucidworks.spark.util.SolrRelationUtil$$anonfun$1$$anonfun$apply$2
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:435)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:84)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:324)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:323)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.map(RDD.scala:323)
at com.lucidworks.spark.util.SolrRelationUtil$.toRows(SolrRelationUtil.scala:264)
at com.lucidworks.spark.SolrRelation.buildScan(SolrRelation.scala:193)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$3.apply(DataSourceStrategy.scala:57)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$3.apply(DataSourceStrategy.scala:57)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:274)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:273)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:352)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:269)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:53)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:349)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:47)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:45)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:52)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:52)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2134)
at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1413)
at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:301)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:144)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:295)
at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Retrieving schema details fails

Hello,

I noticed that the code gets an "HTTP 414 Request-URI Too Long" error when trying to get schema details if the index has lots of fields. Is there any workaround for this issue?

2016-03-21 11:49:37 ERROR SolrQuerySupport:385 - Can't get field metadata from Solr using request 'http://myhost:8983/solr/collection/schema/fields?showDefaults=true&includeDynamic=true&fl=xxx,yyy,zzz,<MORE AND MORE FIELDS>,&wt=json] failed due to: HTTP/1.1 414 Request-URI Too Long: 
    at com.lucidworks.spark.util.SolrJsonSupport$.doJsonRequest(SolrJsonSupport.scala:90)
    at com.lucidworks.spark.util.SolrJsonSupport$.getJson(SolrJsonSupport.scala:71)
    at com.lucidworks.spark.util.SolrJsonSupport$.getJson(SolrJsonSupport.scala:34)
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldDefinitionsFromSchema(SolrQuerySupport.scala:365)
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldTypes(SolrQuerySupport.scala:259)
    at com.lucidworks.spark.util.SolrQuerySupport$.getFieldTypes(SolrQuerySupport.scala:253)
    at com.lucidworks.spark.util.SolrSchemaUtil$.getBaseSchema(SolrSchemaUtil.scala:22)
    at com.lucidworks.spark.SolrRelation.<init>(SolrRelation.scala:66)
    at com.lucidworks.spark.SolrRelation.<init>(SolrRelation.scala:37)
    at solr.DefaultSource.createRelation(DefaultSource.scala:12)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
    at example.SolrExample$.main(SolrExample.scala:16)
    at example.SolrExample.main(SolrExample.scala)

Building fat jar in Scala with SBT

I want to add this project to a Scala project and build it using SBT. I'm having difficulty building the assembly jar; I get a lot of "deduplicate: different file contents found in the following:" errors.

I'm building this project using Maven and added the local Maven repo as a resolver in SBT.
I'm able to build a fat jar in a Java project with Maven. I guess I'm having issues because SBT doesn't recognize the Maven shade plugin.

I've set "mergeStrategy" to discard a few config files but going down this path for all dependencies doesn't sound like a good option.

Has anyone faced similar issues and been able to solve them?
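A minimal build.sbt sketch of the usual sbt-assembly approach: discard META-INF metadata and take the first copy of everything else, instead of enumerating files one by one (key names follow sbt-assembly 0.14.x; adjust for your plugin version):

// build.sbt — catch-all merge strategy for the deduplicate errors
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}

Discarding META-INF and keeping the first copy of duplicated files is usually enough to get past the deduplicate errors, though signature files and service registries may need finer-grained handling.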

Spark 2.0 support

I see that there is already a branch to support Spark 2.0. Any idea when you guys are planning to release it?

Launching Spark Shell with Shaded jar not working with ADD_JAR on Spark 1.5

Hi,
If I try to launch as suggested in the README on the current master, Spark cannot find the solr data source:

[vagrant@test550-master spark-solr-1.1.2]$ PROJECT_HOME=`pwd` && ADD_JARS=$PROJECT_HOME/target/spark-solr-1.0-SNAPSHOT-shaded.jar spark-shell
...
scala> val poems = sqlContext.load("solr", Map("zkHost" -> "test550-master:2181/solr", "collection" -> "poems"))
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
java.lang.ClassNotFoundException: Failed to load class for data source: solr.
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:67)

Invoking spark-shell with the following options fixes this for me:

PROJECT_HOME=`pwd` && spark-shell --jars $PROJECT_HOME/target/spark-solr-1.2.0-SNAPSHOT-shaded.jar --driver-class-path $PROJECT_HOME/target/spark-solr-1.2.0-SNAPSHOT-shaded.jar

scala> val poems = sqlContext.load("solr", Map("zkHost" -> "test550-master:2181/solr", "collection" -> "poems"))
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
2016-01-05 14:56:36,987 [main] INFO SolrZkClient - Using default ZkCredentialsProvider
2016-01-05 14:56:37,032 [main] INFO ConnectionManager - Waiting for client to connect to ZooKeeper
2016-01-05 14:56:37,054 [zkCallback-2-thread-1] INFO ConnectionManager - Watcher shaded.apache.solr.common.cloud.ConnectionManager@7d401432 name:ZooKeeperConnection Watcher:test550-master:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2016-01-05 14:56:37,055 [main] INFO ConnectionManager - Client is connected to ZooKeeper
2016-01-05 14:56:37,055 [main] INFO SolrZkClient - Using default ZkACLProvider
2016-01-05 14:56:37,059 [main] INFO ZkStateReader - Updating cluster state from ZooKeeper...
2016-01-05 14:56:37,545 [main] WARN HiveConf - HiveConf of name hive.enable.spark.execution.engine does not exist
poems: org.apache.spark.sql.DataFrame = [id: string, text: string, _version_: bigint]

scala>

The ADD_JAR option does however work with the 1.1.2 release.

Union on DataFrames just returns the argument?

On spark-solr-created DataFrames I observe this strange behaviour where a.union(b) just contains b.
Not completely sure where to put the blame, but I wasn't able to reproduce it without solr:

scala> val df = spark.sqlContext.read.format("solr").options(options).load;
val dfs = df.select($"process".as[String]).distinct.collect.map(p => df.filter($"process" === p));
val emptyDF = spark.sqlContext.createDataFrame(sc.emptyRDD[Row], df.schema);
val unionDf = dfs.fold(emptyDF)((a:DataFrame, b:DataFrame) => a.union(b));
(dfs.map(_.count), unionDf.count)

df: org.apache.spark.sql.DataFrame = [process: string, host: string ... 8 more fields]
dfs: Array[org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]] = Array([process: string, host: string ... 8 more fields], [process: string, host: string ... 8 more fields], [process: string, host: string ... 8 more fields])
emptyDF: org.apache.spark.sql.DataFrame = [process: string, host: string ... 8 more fields]
unionDf: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [process: string, host: string ... 8 more fields]
res40: (Array[Long], Long) = (Array(12793, 1375, 3125),3125)

scala> unionDf.explain
== Physical Plan ==
Union
:- Scan ExistingRDD[process#1342,host#1343,data#1344,id#1345,measurement#1346,metric#1347,end#1348L,_version_#1349L,start#1350L,group#1351]
:- *Filter (isnotnull(process#1302) && (process#1302 = jenkins-jolokia))
:  +- *Scan com.lucidworks.spark.SolrRelation@3552f9d8 [process#1302,host#1303,data#1304,id#1305,measurement#1306,metric#1307,end#1308L,_version_#1309L,start#1310L,group#1311] PushedFilters: [IsNotNull(process), EqualTo(process,jenkins-jolokia)]
:- *Filter (isnotnull(process#1302) && (process#1302 = jenkins))
:  +- *Scan com.lucidworks.spark.SolrRelation@3552f9d8 [process#1302,host#1303,data#1304,id#1305,measurement#1306,metric#1307,end#1308L,_version_#1309L,start#1310L,group#1311] PushedFilters: [IsNotNull(process), EqualTo(process,jenkins)]
+- *Filter (isnotnull(process#1302) && (process#1302 = global))
   +- *Scan com.lucidworks.spark.SolrRelation@3552f9d8 [process#1302,host#1303,data#1304,id#1305,measurement#1306,metric#1307,end#1308L,_version_#1309L,start#1310L,group#1311] PushedFilters: [IsNotNull(process), EqualTo(process,global)]

Without solr involved it looks good:

scala> val df = spark.sqlContext.createDataset(for (i <- 1 to 1000; j <- 1 to 3) yield (i,j)).toDF;
val dfs = df.select($"_2".as[Int]).distinct.collect.map(p => df.filter($"_2" === p));
val emptyDF = spark.sqlContext.createDataFrame(sc.emptyRDD[Row], df.schema);
val unionDf = dfs.fold(emptyDF)((a:DataFrame, b:DataFrame) => a.union(b));
(dfs.map(_.count), unionDf.count)

df: org.apache.spark.sql.DataFrame = [_1: int, _2: int]
dfs: Array[org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]] = Array([_1: int, _2: int], [_1: int, _2: int], [_1: int, _2: int])
emptyDF: org.apache.spark.sql.DataFrame = [_1: int, _2: int]
unionDf: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [_1: int, _2: int]
res38: (Array[Long], Long) = (Array(1000, 1000, 1000),3000)

scala> unionDf.explain
== Physical Plan ==
Union
:- Scan ExistingRDD[_1#1257,_2#1258]
:- *Filter (_2#1241 = 1)
:  +- LocalTableScan [_1#1240, _2#1241]
:- *Filter (_2#1241 = 3)
:  +- LocalTableScan [_1#1240, _2#1241]
+- *Filter (_2#1241 = 2)
   +- LocalTableScan [_1#1240, _2#1241]

join and left join

SPARK-SQL: tableA a left join tableB b where a.id = 123 and b.id = 123
It is very slow; the "where" is not pushed down to Solr.

But when I run this SQL:
tableA a join tableB b where a.id = 123 and b.id = 123
it's quick.

I guess the left join handling needs to be updated.

Not able to index the content of a DataFrame (generated from a JSON file): Method Not Found exception

Hi,

I am trying to index the content of a DataFrame (generated from a simple JSON file) in Solr using the following command:

val df = sqlContext.read.json("emp.json")
val solrOpts = Map("zkhost" -> "<ip>:<port>/solr", "collection" -> "sample_test2")
df.write.format("solr").options(solrOpts).mode(org.apache.spark.sql.SaveMode.Append).save()

I am getting a "Method Not Found" exception. It says:

Request to collection mycollection failed due to (405) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://:8983/solr/mycollection: Expected mime type application/octet-stream but got text/html.

Can I provide the mime type in the solrOpts parameter?

Also, when I try to add a document from the Apache Solr admin portal, it gets indexed properly.

The schema.xml file has only three fields:

  1. empid (unique)
  2. employeename
  3. country

Sample json : {"empid":"102","employeename":"Peter","country":"US"}

Thanks

Cannot set the number of rows to None

The SolrRDD accepts a number of rows which is an Option, but setting it to None will cause an Exception from this line because it is calling None.get. That line is actually meaningless as it is right now, and I think the condition should be modified. I think it makes more sense to check if rows.isDefined and set the number of rows in the query accordingly.
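A minimal sketch of that suggested check (names follow the issue text, not the actual source):

// Only push a row limit down to Solr when a value was actually supplied.
if (rows.isDefined) solrQuery.setRows(rows.get)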

Cannot use it with Solr 5.0.0 in cluster mode

The out-of-the-box pom.xml, which uses SolrJ and solr-core 4.10.3, gives:
2015-03-28 22:43:46,650 [Executor task launch worker-0] ERROR Executor - Exception in task 0.0 in stage 6.0 (TID 6)
org.apache.solr.common.SolrException: Could not find collection :

If I compile this with SolrJ 5.0.0 and solr-core 5.0.0 I get:

2015-03-28 23:04:09,014 [Executor task launch worker-1] ERROR Executor - Exception in task 1.0 in stage 8.0 (TID 16)
java.lang.Error: Unresolved compilation problem:
Type mismatch: cannot convert from CloudSolrServer to SolrServer

at com.lucidworks.spark.SolrSupport$4.call(SolrSupport.java:162)

org.apache.solr.common.SolrException: Could not find collection

Hi,
I am trying to run this plugin with Solr 5.0.0 running in cloud mode and I get the following error.

2015-03-26 13:50:05,859 [Executor task launch worker-0] INFO ConnectionManager - Waiting for client to connect to ZooKeeper
2015-03-26 13:50:05,867 [Executor task launch worker-0-SendThread(localhost:9983)] WARN ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2015-03-26 13:50:05,979 [Executor task launch worker-0-EventThread] INFO ConnectionManager - Watcher org.apache.solr.common.cloud.ConnectionManager@54b888c2 name:ZooKeeperConnection Watcher:localhost:9983 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
2015-03-26 13:50:05,979 [Executor task launch worker-0] INFO ConnectionManager - Client is connected to ZooKeeper
2015-03-26 13:50:05,984 [Executor task launch worker-0] INFO ZkStateReader - Updating cluster state from ZooKeeper...
2015-03-26 13:50:06,005 [Executor task launch worker-0] ERROR SolrSupport - Send batch to collection testcol failed due to: org.apache.solr.common.SolrException: Could not find collection : testcol
org.apache.solr.common.SolrException: Could not find collection : testcol
at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:162)
at org.apache.solr.client.solrj.impl.CloudSolrServer.directUpdate(CloudSolrServer.java:305)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:533)
at com.lucidworks.spark.SolrSupport.sendBatchToSolr(SolrSupport.java:188)
at com.lucidworks.spark.SolrSupport$4.call(SolrSupport.java:170)
at com.lucidworks.spark.SolrSupport$4.call(SolrSupport.java:160)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2015-03-26 13:50:06,005 [Executor task launch worker-0] ERROR Executor - Exception in task 0.0 in stage 6.0 (TID 6)
org.apache.solr.common.SolrException: Could not find collection : testcol
at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:162)
at org.apache.solr.client.solrj.impl.CloudSolrServer.directUpdate(CloudSolrServer.java:305)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:533)
at com.lucidworks.spark.SolrSupport.sendBatchToSolr(SolrSupport.java:188)
at com.lucidworks.spark.SolrSupport$4.call(SolrSupport.java:170)
at com.lucidworks.spark.SolrSupport$4.call(SolrSupport.java:160)

Invocation of AnalysisException uses null in 1.1.2 release

When constructing a solr query that does not yield any results I run into the following issue with the 1.1.2 shaded jar. I am aware that this section of the code has seen massive changes on the current master, so I don’t know if anyone would consider fixing my issue for the 1.1.2 jar.

scala> sqlContext.read.format("solr").option("zkHost", "test550-master:2181/solr").option("collection", "poems2").option("query","text:\"Mary lamb\"~2").load().show()
16/01/05 15:45:58 INFO SolrRelation: before: solrRDD = new SolrRDD(zkHost, collection);
16/01/05 15:45:58 INFO SolrRelation: before: solrQuery = SolrRDD.toQuery(query);
16/01/05 15:45:58 INFO SolrRelation: before: if (fieldList != null) {
16/01/05 15:45:58 INFO SolrRelation: before: if (dataFrame != null) {
16/01/05 15:45:58 INFO SolrRelation: before: schema = solrRDD.getQuerySchema(solrQuery);
java.lang.NullPointerException
    at org.apache.spark.sql.AnalysisException.getMessage(AnalysisException.scala:40)
    at java.lang.Throwable.getLocalizedMessage(Throwable.java:391)
    at java.lang.Throwable.toString(Throwable.java:480)
    at java.lang.Throwable.<init>(Throwable.java:311)
    at java.lang.Exception.<init>(Exception.java:102)
    at java.lang.RuntimeException.<init>(RuntimeException.java:96)
    at solr.DefaultSource.createRelation(DefaultSource.java:26)
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:125)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)

This is due to code in SolrRDD.java:getQuerySchema where a Spark/Scala AnalysisException is thrown with null parameters. null does not expand to None in Scala, which is expected in the getOrElse method in AnalysisException.scala. For calling Scala exceptions from Java, one can use scala.Option.apply(null) in Java, which is a bit hacky. But even if we do this, the code will give an ambiguous error indication, which I try to fix locally with the patch below. As initially mentioned, I am not sure if anyone would fix this for the current stable release at this point, but reporting nevertheless.

On 2935b32:

@@ -541,18 +541,24 @@ public class SolrRDD implements Serializable {
       probeForFieldsQuery.set("fl", "*");
       probeForFieldsQuery.setStart(0);
       probeForFieldsQuery.setRows(10);
       QueryResponse probeForFieldsResp = solrServer.query(probeForFieldsQuery);
       SolrDocumentList hits = probeForFieldsResp.getResults();
+      if (hits.getNumFound() < 1) {
+          scala.Option<Object> x = scala.Option.apply(null);
+          throw new AnalysisException("Query ("+query+") does not return any documents from Solr!", x, x);
+      }
       Set<String> fieldSet = new TreeSet<String>();
       for (SolrDocument hit : hits)
         fieldSet.addAll(hit.getFieldNames());
       fields = fieldSet.toArray(new String[0]);
     }

-    if (fields == null || fields.length == 0)
-      throw new AnalysisException("Query ("+query+") does not specify any fields needed to build a schema!", null, null);
+    if (fields == null || fields.length == 0) {
+        scala.Option<Object> x = scala.Option.apply(null);
+        throw new AnalysisException("Query ("+query+") does not specify any fields needed to build a schema!", x, x);
+    }

     Set<String> liveNodes = solrServer.getZkStateReader().getClusterState().getLiveNodes();
     if (liveNodes.isEmpty())
       throw new RuntimeException("No live nodes found for cluster: "+zkHost);
     String solrBaseUrl = solrServer.getZkStateReader().getBaseUrlForNodeName(liveNodes.iterator().next());

unresolved dependency: com.lucidworks.solr#spark-solr;2.0.1: not found

$SPARK_HOME/bin/spark-shell --packages "com.lucidworks.solr:spark-solr:2.0.1"

It cannot get the correct jar:
https://repo1.maven.org/maven2/com/lucidworks/spark/spark-solr/2.0.1/spark-solr-2.0.1-shaded.jar

and instead it tries to get:

==== central: tried

  https://repo1.maven.org/maven2/com/lucidworks/solr/spark-solr/2.0.1/spark-solr-2.0.1.pom

  -- artifact com.lucidworks.solr#spark-solr;2.0.1!spark-solr.jar:

  https://repo1.maven.org/maven2/com/lucidworks/solr/spark-solr/2.0.1/spark-solr-2.0.1.jar

==== spark-packages: tried

  http://dl.bintray.com/spark-packages/maven/com/lucidworks/solr/spark-solr/2.0.1/spark-solr-2.0.1.pom

  -- artifact com.lucidworks.solr#spark-solr;2.0.1!spark-solr.jar:

  http://dl.bintray.com/spark-packages/maven/com/lucidworks/solr/spark-solr/2.0.1/spark-solr-2.0.1.jar

    ::::::::::::::::::::::::::::::::::::::::::::::

    ::          UNRESOLVED DEPENDENCIES         ::

    ::::::::::::::::::::::::::::::::::::::::::::::

    :: com.lucidworks.solr#spark-solr;2.0.1: not found

    ::::::::::::::::::::::::::::::::::::::::::::::

:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: com.lucidworks.solr#spark-solr;2.0.1: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1011)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:286)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:153)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
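
A note on the failure: the coordinate in the --packages string uses the groupId com.lucidworks.solr, while the jar that actually exists on Maven Central (the URL quoted above) lives under com/lucidworks/spark. Assuming the groupId is the only problem, the corrected invocation would presumably be:

$SPARK_HOME/bin/spark-shell --packages "com.lucidworks.spark:spark-solr:2.0.1"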

NoSuchMethodError: scala.reflect.NameTransformer$.LOCAL_SUFFIX_STRING()Ljava/lang/String;

Using version 2.0.1 of spark-solr, Spark 1.6.1, Java 1.8, and Solr 5.3.1, the following Java code fails immediately with the exception above. The query is a SolrQuery object. I can post more if needed, but is this a Spark version mismatch of some kind?

JavaPairRDD<String, ArrayRIV> drRdd = SolrJavaRDD.get(props.zkHosts(), props.collection(), jsc.sc())
.queryShards(query)

Reading back arrays in a DataFrame no longer works in 2.0.0

Hello,

I was using spark-solr 1.1.2 from Java fairly happily up until now. Then I saw that 2.0.0 was out, so I tried it, and it broke a lot of our unit tests.
There are 2 issues:

  • int and float don't seem to work anymore: when I write values to Solr as int and float, they are read back as long and double. Is there a way to read them back as int and float? This is minor and I can probably live with it.
  • When I read back an array (of any type) as a column in a Spark DataFrame from Solr, the 2.0.0 library only gives me the first element, and the column is no longer an array (it is just the underlying element type).

e.g. If I have in Solr:
someArray : ["foo", "bar"]

The connector used to produce:
|-- someArray : array(string) (nullable = true)
as the column type in the Spark DataFrame when read back, but now I get:
|-- someArray : string (nullable = true)
and the value is just:
"foo"

Is this a bug, or are there some parameters I am missing?
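
A minimal sketch of one thing worth trying, assuming the 2.0.0 data source still honours the flatten_multivalued option that appears in the 1.1.2 example further down this page; setting it to "false" should keep multi-valued fields as arrays rather than collapsing them to a single value:

val options = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}",
  "flatten_multivalued" -> "false"  // assumption: keeps multi-valued fields as arrays
)
val df = sqlContext.read.format("solr").options(options).load()
df.printSchema()  // someArray should then show up as array<string> rather than string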

spark-solr version 1.1.2

Hello,

I am using the spark-solr jar, but an older version, 1.1.2. I am using the following options for querying Solr.

val options = Map(
  "collection" -> collection,
  "zkhost" -> zkHost,
  "request_handler" -> "/export",
  "flatten_multivalued" -> "false"
)
sqlContext.read.format("solr").options(options).load()

I am just wondering which API is being used underneath. Is it the streaming API (with the /export handler) or something else?

Thanks.

too many "OR" not support!

Hello,
I run a SQL query like this: "select * from table where id='123' or id='345' or id='456'"
and it throws this exception:
Error: java.lang.IllegalArgumentException: Filters of type 'Or(EqualTo(order_id,99001331547aa1ad01547aa1e89b0042),EqualTo(order_id,99001331547b710f01547b71bc060066)) (org.apache.spark.sql.sources.Or)' not supported! (state=,code=0)

But with just two "OR" conditions it is OK! Can you help me?

spark rdd exception

When I use:

SolrQuery solrQuery = new SolrQuery("*:*");
SolrJavaRDD solrRDD = SolrJavaRDD.get(zkHost, collection, jsc.sc());
JavaRDD<SolrDocument> resultsRDD = solrRDD.queryShards(solrQuery);
resultsRDD.count();
I get this exception:
cursorMark=AoE/ATk5OTI5MDAzNTdmNjU4YWUwMTU3ZjY1YTA4MDIwMDA0, read 20199 of 20200 so far from http://10.1.1.1:8080/solr/order_shard1_replica1. Most likely this means your query's sort criteria is not generating stable results for computing deep-paging cursors, has the index changed? If so, try using a filter criteria the bounds the results to non-changing data.

But the total number of documents is only about 30,000.

When I use:

val df = spark.read.format("solr").options(options).load
df.count()

the result is OK.

And when I use the first approach but call solrQuery.setRows(10000), the result is also OK.
Why? I don't know how to work around this.

Can you help me?

My email: [email protected]
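
A hedged sketch of one thing to try, not a confirmed fix: cursor-based deep paging needs a sort order that stays stable while the results are being read. Forcing an explicit sort on the collection's uniqueKey (assumed to be "id" here) is one way to make the cursor order deterministic; restricting the query to data that is not changing during the read, as the error message itself suggests, is another.

import org.apache.solr.client.solrj.SolrQuery
import com.lucidworks.spark.rdd.SolrJavaRDD

// zkHost, collection and sc are placeholders for your ZooKeeper connect string,
// collection name and SparkContext
val solrQuery = new SolrQuery("*:*")
// assumption: "id" is the collection's uniqueKey; a stable, explicit sort keeps
// the cursorMark consistent across pages
solrQuery.setSort("id", SolrQuery.ORDER.asc)
val resultsRDD = SolrJavaRDD.get(zkHost, collection, sc).queryShards(solrQuery)
resultsRDD.count()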

{!terms f=order_id}123,123 not supported!

Dear maintainers,
I use a query like this:
JavaRDD<SolrDocument> resultsRDD = solrRDD.query("{!terms f=order_id}0000133152f850840152f872c57d00bc");

but resultsRDD.count() == 0.

If I use:
JavaRDD<SolrDocument> resultsRDD = solrRDD.query("order_id:0000133152f850840152f872c57d00bc");

then resultsRDD.count() > 0.

Does the utility not support that?
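
A hedged workaround sketch, not a confirmed behaviour of the library: pass the {!terms} local-params query as a filter query on a SolrQuery object and use queryShards, so the local params reach Solr unmodified (the assumption being that the plain query-string path re-parses or mangles the local params):

import org.apache.solr.client.solrj.SolrQuery
import com.lucidworks.spark.rdd.SolrJavaRDD

// zkHost, collection and sc are placeholders for your ZooKeeper connect string,
// collection name and SparkContext
val solrQuery = new SolrQuery("*:*")
// assumption: sending {!terms} as a filter query keeps the local params intact
solrQuery.addFilterQuery("{!terms f=order_id}0000133152f850840152f872c57d00bc")
val resultsRDD = SolrJavaRDD.get(zkHost, collection, sc).queryShards(solrQuery)
resultsRDD.count()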
