
Splittable SAS (.sas7bdat) Input Format for Hadoop and Spark SQL

Home Page: http://spark-packages.org/package/saurfang/spark-sas7bdat

License: Apache License 2.0

Language: Scala (100%)

spark-sas7bdat's Introduction

Spark SAS Data Source (sas7bdat)

A library for reading SAS data (.sas7bdat) with Spark.

Chat: https://gitter.im/saurfang/spark-sas7bdat

Requirements:

Download:

The latest jar can be downloaded from spark-packages.

Version        Scala Version   Spark Version
3.0.0-s_2.11   2.11.x          2.4.x
3.0.0-s_2.12   2.12.x          3.0.x
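
If you build with sbt rather than downloading the jar directly, a minimal dependency declaration along these lines should work. This is a sketch: the coordinates come from the spark-packages listing above, but the repository URL is an assumption (the current spark-packages Maven repository), so adjust it if your environment mirrors packages elsewhere.

// build.sbt (assumed resolver URL for spark-packages artifacts)
resolvers += "spark-packages" at "https://repos.spark-packages.org/"
libraryDependencies += "saurfang" % "spark-sas7bdat" % "3.0.0-s_2.12"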

Features:

  • This package allows reading SAS files from local and distributed filesystems into Spark DataFrames.
  • Schema is automatically inferred from metadata embedded in the SAS file. (Behaviour can be customised, see parameters below)
  • The SAS format is splittable when not file-system compressed, thus we are able to convert a 200GB (1.5Bn rows) .sas7bdat file to .csv files using 2000 executors in under 2 minutes.
  • This library uses parso for parsing, as it is the only publicly available parser that handles both forms of SAS compression (CHAR and BINARY).

NOTE: this package does not support writing sas7bdat files

Docs:

Parameters:

  • extractLabel (Default: false)
    • Boolean: extract column labels as column comments for Parquet/Hive
  • forceLowercaseNames (Default: false)
    • Boolean: force column names to lower case
  • inferDecimal (Default: false)
    • Boolean: infer numeric columns with format width >0 and format precision >0, as Decimal(Width, Precision)
  • inferDecimalScale (Default: each column's format width)
    • Int: scale of inferred decimals
  • inferFloat (Default: false)
    • Boolean: infer numeric columns with <=4 bytes, as Float
  • inferInt (Default: false)
    • Boolean: infer numeric columns with <=4 bytes, format width >0 and format precision =0, as Int
  • inferLong (Default: false)
    • Boolean: infer numeric columns with <=8 bytes, format width >0 and format precision =0, as Long
  • inferShort (Default: false)
    • Boolean: infer numeric columns with <=2 bytes, format width >0 and format precision =0, as Short
  • metadataTimeout (Default: 60)
    • Int: number of seconds to allow reading of file metadata (stops corrupt files hanging)
  • minSplitSize (Default: mapred.min.split.size)
    • Long: minimum byte length of input splits (splits are always at least 1MB, to ensure correct reads)
  • maxSplitSize (Default: mapred.max.split.size)
    • Long: maximum byte length of input splits, (can be decreased to force higher parallelism)

NOTE:

  • the order of precedence for numeric type inference is: Long -> Int -> Short -> Decimal -> Float -> Double
  • SAS doesn’t have a concept of Long/Int/Short; instead, people typically use column formats with 0 precision (see the sketch after this note)
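
To make these options concrete, here is a minimal sketch that enables several inference options at once. It assumes the same local cars.sas7bdat sample used in the examples below; which columns actually come back as Int/Long/Decimal depends on the formats stored in your file.

val typedDf = spark.read
  .format("com.github.saurfang.sas.spark")
  .option("extractLabel", true)   // carry SAS column labels over as column comments
  .option("inferInt", true)       // <=4 byte numerics with 0-precision formats -> Int
  .option("inferLong", true)      // <=8 byte numerics with 0-precision formats -> Long
  .option("inferDecimal", true)   // numerics with format width >0 and precision >0 -> Decimal
  .load("cars.sas7bdat")

typedDf.printSchema()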

Scala API

val df = {
  spark.read
    .format("com.github.saurfang.sas.spark")
    .option("forceLowercaseNames", true)
    .option("inferLong", true)
    .load("cars.sas7bdat")
}
df.write.format("csv").option("header", "true").save("newcars.csv")

You can also use the implicit readers:

import com.github.saurfang.sas.spark._

// DataFrameReader
val df = spark.read.sas("cars.sas7bdat")
df.write.format("csv").option("header", "true").save("newcars.csv")

// SQLContext
val df2 = sqlContext.sasFile("cars.sas7bdat")
df2.write.format("csv").option("header", "true").save("newcars.csv")

(Note: you cannot use parameters like inferLong with the implicit readers.)

Python API

df = spark.read.format("com.github.saurfang.sas.spark").load("cars.sas7bdat", forceLowercaseNames=True, inferLong=True)
df.write.csv("newcars.csv", header=True)

R API

df <- read.df("cars.sas7bdat", source = "com.github.saurfang.sas.spark", forceLowercaseNames = TRUE, inferLong = TRUE)
write.df(df, path = "newcars.csv", source = "csv", header = TRUE)

SQL API

SAS data can be queried in pure SQL by registering the data as a (temporary) table.

CREATE TEMPORARY VIEW cars
USING com.github.saurfang.sas.spark
OPTIONS (path="cars.sas7bdat")
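
Once the view exists, it can be queried like any other table. For example, from Scala (a trivial sketch; SELECT * keeps it independent of the file's schema):

// query the temporary view registered above
spark.sql("SELECT * FROM cars LIMIT 5").show()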

SAS Export Runner

We included a simple SasExport Spark program that converts .sas7bdat to .csv or .parquet files:

sbt "run input.sas7bdat output.csv"
sbt "run input.sas7bdat output.parquet"

To achieve more parallelism, use the spark-submit script to run it on a Spark cluster (an example invocation is sketched below). If you don't have a Spark cluster, you can still run it in local mode and take advantage of multiple cores.
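
For illustration, a cluster submission could look roughly like this. The main class and jar are placeholders, not defined by this README: substitute the SasExport entry point and the jar you build from this repository.

spark-submit \
  --master yarn \
  --num-executors 50 \
  --class <SasExport main class> \
  <path/to/spark-sas7bdat-assembly.jar> \
  input.sas7bdat output.parquet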

Spark Shell

spark-shell --master local[4] --packages saurfang:spark-sas7bdat:3.0.0-s_2.12

Caveats

  1. spark-csv writes out null as "null" in csv text output. This means that if you read it back as a string type, you might actually get the string "null" instead of a null value. The safest option is to export in parquet format, where null is properly recorded (see the sketch below). See databricks/spark-csv#147 for an alternative solution.
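
As a concrete sketch of the safer route, reusing the df loaded in the Scala example above:

// parquet preserves real nulls instead of the literal string "null"
df.write.parquet("newcars.parquet")

// reading it back keeps null values as null
val roundTripped = spark.read.parquet("newcars.parquet")
roundTripped.printSchema()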

Related Work

Acknowledgements

This project would not be possible without parso's continued improvements and the generous contributions from @mulya, @thesuperzapper, and many others. We are honored to be a recipient of the 2020 WiseWithData ELEVATE Awards and appreciate their generous donations.

spark-sas7bdat's People

Contributors

8bit-pixies, gitter-badger, mulya, rayz90, saurfang, tagar, thadeusb, thesuperzapper


spark-sas7bdat's Issues

how to support GBK?

I could not find where to set the encoding parameter; I have a SAS dataset with GBK encoding.

Cannot read file from s3 path

We are trying to read a file from an S3 path in AWS Glue, and it looks like it's not able to read it. Can this library read a file from an S3 path?

Issue regarding reading of sas7bdat file and then converting it into csv file.

I am attempting to convert a .sas7bdat file into .csv using PySpark. I intended to use spark-sas7bdat to create a Spark DataFrame from the sas7bdat file and then use the spark-csv package to write it out as a csv file.
When I run df = sqlContext.read.format("com.github.saurfang.sas.spark").load("filename.sas7bdat"), no error is thrown and the command returns very quickly. But when I then call df.count(), it throws a long error across many lines, something like a SasFileParser error. Also, when I try to save it as a csv file with df.write.csv('filename.csv') or df.write.format('com.databricks.spark.csv').save('mycsv.csv'), the task never completes and hangs without giving any output or error. The process gets stuck for hours even for very small files (around 30MB); I intend to use it for larger files but wanted to test on small ones first. The image shows the resource issue, which I overcame by using larger nodes, but even then it gets stuck. Can someone please tell me the possible reason and the solution for this?
(attached screenshot: capture)

Cannot read SAS File

Hello,
I have a compressed SAS file with a few hundred columns, around 3,000,000 observations, and a file size of about 600MB. Reading the file ends in an error (appended below).
Please ask if I can provide further information.
Any help is very much appreciated.

Thanks in advance for help, Thomas

Stacktrace:
scala> <some object>.write.format("orc").saveAsTable("<some name>")
[Stage 3:>                                                          (0 + 0) / 5]18/02/26 15:44:38 ERROR Utils: Aborting task
java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more
18/02/26 15:44:38 WARN FileOutputCommitter: Could not delete hdfs://lde0162t.de.top.com:8020/apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000002_0
18/02/26 15:44:38 ERROR FileFormatWriter: Job job_20180226154438_0003 aborted.
18/02/26 15:44:38 ERROR Executor: Exception in task 2.0 in stage 3.0 (TID 11)
org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more
18/02/26 15:44:38 WARN TaskSetManager: Lost task 2.0 in stage 3.0 (TID 11, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more

18/02/26 15:44:38 ERROR TaskSetManager: Task 2 in stage 3.0 failed 1 times; aborting job
18/02/26 15:44:38 ERROR Utils: Aborting task
org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
18/02/26 15:44:38 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 3.0 failed 1 times, most recent failure: Lost task 2.0 in stage 3.0 (TID 11, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
        at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
        at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:500)
        at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:263)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:404)
        at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:358)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:32)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:37)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:39)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:41)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:43)
        at $line40.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:45)
        at $line40.$read$$iw$$iw$$iw$$iw.<init>(<console>:47)
        at $line40.$read$$iw$$iw$$iw.<init>(<console>:49)
        at $line40.$read$$iw$$iw.<init>(<console>:51)
        at $line40.$read$$iw.<init>(<console>:53)
        at $line40.$read.<init>(<console>:55)
        at $line40.$read$.<init>(<console>:59)
        at $line40.$read$.<clinit>(<console>)
        at $line40.$eval$.$print$lzycompute(<console>:7)
        at $line40.$eval$.$print(<console>:6)
        at $line40.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
        at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
        at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
        at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
        at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
        at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
        at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
        at org.apache.spark.repl.Main$.doMain(Main.scala:69)
        at org.apache.spark.repl.Main$.main(Main.scala:52)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more
18/02/26 15:44:38 ERROR Utils: Aborting task
org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
18/02/26 15:44:38 WARN FileOutputCommitter: Could not delete hdfs://lde0162t.de.top.com:8020/apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000004_0
18/02/26 15:44:38 ERROR FileFormatWriter: Job job_20180226154438_0003 aborted.
18/02/26 15:44:38 ERROR Executor: Exception in task 4.0 in stage 3.0 (TID 13)
org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
18/02/26 15:44:38 ERROR Utils: Aborting task
org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
18/02/26 15:44:38 WARN FileOutputCommitter: Could not delete hdfs://lde0162t.de.top.com:8020/apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000003_0
18/02/26 15:44:38 ERROR FileFormatWriter: Job job_20180226154438_0003 aborted.
18/02/26 15:44:38 ERROR Executor: Exception in task 3.0 in stage 3.0 (TID 12)
org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
18/02/26 15:44:38 WARN DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000000_0/part-00000-1a342208-9aa9-47e8-894e-4dc804177880.snappy.orc (inode 7389846): File does not exist. [Lease.  Holder: DFSClient_NONMAPREDUCE_-802042230_1, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3660)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3463)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3301)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
        at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1574)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1369)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:558)
18/02/26 15:44:38 WARN FileOutputCommitter: Could not delete hdfs://lde0162t.de.top.com:8020/apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000000_0
18/02/26 15:44:38 ERROR FileFormatWriter: Job job_20180226154438_0003 aborted.
18/02/26 15:44:38 WARN Utils: Suppressing exception in catch: No lease on /apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000000_0/part-00000-1a342208-9aa9-47e8-894e-4dc804177880.snappy.orc (inode 7389846): File does not exist. [Lease.  Holder: DFSClient_NONMAPREDUCE_-802042230_1, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3660)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3463)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3301)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000000_0/part-00000-1a342208-9aa9-47e8-894e-4dc804177880.snappy.orc (inode 7389846): File does not exist. [Lease.  Holder: DFSClient_NONMAPREDUCE_-802042230_1, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3660)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3463)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3301)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
        at org.apache.hadoop.ipc.Client.call(Client.java:1498)
        at org.apache.hadoop.ipc.Client.call(Client.java:1398)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
        at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1574)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1369)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:558)
18/02/26 15:44:38 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 9)
org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
        Suppressed: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154438_0003_m_000000_0/part-00000-1a342208-9aa9-47e8-894e-4dc804177880.snappy.orc (inode 7389846): File does not exist. [Lease.  Holder: DFSClient_NONMAPREDUCE_-802042230_1, pendingcreates: 1]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3660)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3463)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3301)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

                at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
                at org.apache.hadoop.ipc.Client.call(Client.java:1498)
                at org.apache.hadoop.ipc.Client.call(Client.java:1398)
                at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
                at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
                at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:459)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
                at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
                at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
                at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
                at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1574)
                at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1369)
                at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:558)
org.apache.spark.SparkException: Job aborted.
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:147)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:484)
  at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:500)
  at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:263)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:404)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:358)
  ... 50 elided
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 3.0 failed 1 times, most recent failure: Lost task 2.0 in stage 3.0 (TID 11, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
        at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
        at java.util.ArrayList.rangeCheck(ArrayList.java:653)
        at java.util.ArrayList.get(ArrayList.java:429)
        at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
        ... 31 more

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1961)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
  ... 82 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:99)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
  at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:27)
  at com.github.saurfang.sas.parso.SasFileParserWrapper.readNext(ParsoWrapper.scala:80)
  at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:125)
  at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:124)
  at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:137)
  at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:33)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:266)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:211)
  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
  ... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
  at java.util.ArrayList.get(ArrayList.java:429)
  at com.epam.parso.impl.SasFileParser.readNext(SasFileParser.java:493)
  ... 31 more

scala> 18/02/26 15:44:39 ERROR Utils: Aborting task
org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
18/02/26 15:44:39 WARN FileOutputCommitter: Could not delete hdfs://lde0162t.de.top.com:8020/apps/hive/warehouse/bicc.db/tbdl_gdgdra_drtv/_temporary/0/_temporary/attempt_20180226154439_0003_m_000001_0
18/02/26 15:44:39 ERROR FileFormatWriter: Job job_20180226154439_0003 aborted.
18/02/26 15:44:39 ERROR Executor: Exception in task 1.0 in stage 3.0 (TID 10)
org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.TaskKilledException
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1353)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
        ... 8 more

Issue: The tab space after a string is removed when the file is read in by the jar

The tab space after any string value in a column is removed when the file is read in by the jar, so the string lengths in the data frame below differ from the original data.

+---------+------------+-----------------+------------+
|Text_Only|Text_Tab_Beg| Text_Tab_Mid|Text_Tab_End|
+---------+------------+-----------------+------------+
| ABCDEFGH| ABCDEFGH|ABCDEFGH IJKLMNOP| ABCDEFGH|
+---------+------------+-----------------+------------+

The length function gives the results below:

Text_Only=8
Text_Tab_Beg=9
Text_Tab_End=8

The input data is thus changed: the trailing tab space is removed.
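A quick way to check for this behaviour is to compare the raw string lengths after reading. A minimal sketch, assuming a test file named tabs.sas7bdat with the columns shown above:

import org.apache.spark.sql.functions.{col, length}

// Read the test file and inspect string lengths: a trimmed trailing tab
// shows up as a length one shorter than the value stored in SAS.
val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .load("tabs.sas7bdat")

df.select(
  length(col("Text_Only")).as("len_only"),
  length(col("Text_Tab_Beg")).as("len_beg"),
  length(col("Text_Tab_End")).as("len_end")
).show()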

Splitting Internally Compressed sas7bdat

There appears to be an issue reading internally compressed sas7bdat as discussed in #32. This is a recap of what we know so far about the issue and what is required to identify the root cause and a possible fix.

Background

sas7bdat is a binary data storage file format used by SAS. There is no public documentation of the format, and different versions of SAS appear to have evolved it over the years. The best documentation of sas7bdat can be found at SAS7BDAT Database Binary Format. However, it should be taken with a grain of salt, since it accurately reflects neither the latest revision nor the internal compression used in sas7bdat.
At a high level, sas7bdat stores data in pages, and each page contains rows of serialized data. This is the basis for the spark-sas7bdat package, which splits the dataset to process it in parallel. Internally, spark-sas7bdat delegates deserialization to the parso Java library, which does an amazing job of deserializing a sas7bdat file sequentially.

Problem

spark-sas7bdat contains unit tests to verify that sas7bdat can indeed be split and read correctly as a DataFrame. However, there have been reports that it fails for many datasets in the wild. See #32.

It has been verified that parso can read these problematic files just fine (#32 (comment)). Therefore the bug is most likely in how we determine the appropriate splits for dividing the sas7bdat file for parallel processing (https://github.com/saurfang/spark-sas7bdat/blob/master/src/main/scala/com/github/saurfang/sas/mapred/SasRecordReader.scala), since everything else is just a thin wrapper over Parso thanks to @mulya's contribution in #10.

Furthermore, it is likely this issue only happens with certain versions of sas7bdat or with sas7bdat files that enable internal compression. Recall that only externally compressed files (e.g. gzip) are not splittable, and for those we don't support parallel reads in spark-sas7bdat.

Proposal

Build Test Case

We first need to collect datasets that exhibit the said issue.

@vivard has provided one: #32 (comment)
@nelson Where did you get your problematic dataset? Can you generate a dummy dataset that exhibits the same problem?

By setting a very low block size in a unit test, we can force Spark to split the input data and hopefully trigger the error.
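One way to do this without touching the Hadoop configuration is the package's documented maxSplitSize option; a rough sketch (the 1 MB value and the file name are placeholders):

// Force many small input splits so a multi-page file is read in parallel,
// hopefully reproducing the corruption reported in #32.
val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .option("maxSplitSize", 1024L * 1024)  // splits are never smaller than 1 MB
  .load("problematic.sas7bdat")

println(df.count())  // compare against the row count reported by SAS or parso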

Debug Test Case

It can be fairly convoluted to debug the splitting logic in Spark and Hadoop. One potential way to debug this is to create a local multi-threaded Parso reader, without Spark, to replicate and validate the splitting logic.

See parso's unit test here where we read row by row from sas7bdat and write to csv.
The idea would be to generalize this by looking at the number of pages in the sas7bdat file, splitting the pages into chunks, creating a separate input stream for each chunk, seeking each stream to its chunk's starting page, and processing the rows from all pages in the chunk. The splitting logic can be refactored from here in this package. Since each Spark executor creates its own input stream, the issue is unlikely to be related to concurrency, so one can run the above logic sequentially for each chunk, which should be easier to debug.
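A rough sketch of that sequential experiment, assuming parso's SasFileProperties exposes the header length, page length, and page count (method names may differ between parso versions), with data.sas7bdat as a placeholder path:

import java.io.FileInputStream
import com.epam.parso.impl.SasFileReaderImpl

// Read only the metadata to learn the page layout of the file.
val metaReader = new SasFileReaderImpl(new FileInputStream("data.sas7bdat"))
val props = metaReader.getSasFileProperties
val headerLength = props.getHeaderLength
val pageLength = props.getPageLength
val pageCount = props.getPageCount

// Page i starts at headerLength + i * pageLength; each chunk's fresh input
// stream would be seeked to its first page before handing it to a reader.
val pagesPerChunk = 16L
val chunkStartOffsets =
  (0L until pageCount by pagesPerChunk).map(i => headerLength + i * pageLength)

chunkStartOffsets.zipWithIndex.foreach { case (offset, idx) =>
  println(s"chunk $idx would start reading at byte offset $offset")
  // ...open a new FileInputStream, skip(offset), and replay the row-reading
  // loop from the parso unit test for the pages in this chunk.
}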

This might help identify and isolate the issue. We might also discover functions, interfaces, and encapsulation that can be contributed back to Parso, which could greatly simplify this package.

Uneven split partitions in spark-sas7bdat

We have carried out tests loading different .sas7bdat file sizes into Parquet format:
the first dataset is ~420 MB (4 partitions, 4 tasks),
the second dataset is ~850 MB (7 partitions, 7 tasks), consistent with Spark's default partition size of 128 MB for optimal performance.

In the output Parquet files we observed that only the first partition contains data, while the remaining partitions produce empty files.
For small files (< 128 MB) it works fine, but once the file size exceeds 128 MB we see this issue.
Can you please help or guide us to resolve it? (A per-partition row-count check is sketched below.)

Originally posted by @kraj007 in https://github.com/saurfang/spark-sas7bdat/issue_comments#issuecomment-432381210
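One way to confirm the skew described above is to count the rows that end up in each input partition. A minimal diagnostic sketch (the file name is a placeholder):

// Count rows per partition of the SAS read; with the reported problem,
// everything lands in the first partition and the rest come back empty.
val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .load("dataset.sas7bdat")

df.rdd
  .mapPartitionsWithIndex((idx, rows) => Iterator((idx, rows.size)))
  .collect()
  .foreach { case (idx, n) => println(s"partition $idx: $n rows") }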

Problems when compiling spark-sas7bdat with scala 2.11.6

There is a small chance of a misconfigured environment, but when I compile spark-sas7bdat with Scala 2.10.5 everything works properly.

The following error occurs when calling an action on a DataFrame created from sas7bdat using 2.11.6. After this, the Java daemon is dead and I have to restart the interpreter.

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NoSuchMethodError: scala.runtime.BooleanRef.zero()Lscala/runtime/BooleanRef;
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:20)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:248)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:216)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$3.apply(SparkPlan.scala:143)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

java.lang.StringIndexOutOfBoundsException: String index out of range

Trying to load a file using PySpark with

df = sqlContext.read.format("com.github.saurfang.sas.spark").load("file.sas7bdat")

I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o189.load.
: java.lang.StringIndexOutOfBoundsException: String index out of range: 32721
	at java.lang.String.substring(String.java:1951)
	at com.ggasoftware.parso.SasFileParser$ColumnNameSubheader.processSubheader(SasFileParser.java:723)
	at com.ggasoftware.parso.SasFileParser.processPageMetadata(SasFileParser.java:466)
	at com.ggasoftware.parso.SasFileParser.processSasFilePageMeta(SasFileParser.java:436)
	at com.ggasoftware.parso.SasFileParser.getMetadataFromSasFile(SasFileParser.java:360)
	at com.ggasoftware.parso.SasFileParser.<init>(SasFileParser.java:280)
	at com.ggasoftware.parso.SasFileParser$Builder.build(SasFileParser.java:264)
	at com.ggasoftware.parso.SasFileReader.<init>(SasFileReader.java:41)
	at com.github.saurfang.sas.spark.SasRelation.inferSchema(SasRelation.scala:98)
	at com.github.saurfang.sas.spark.SasRelation.<init>(SasRelation.scala:32)
	at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:34)
	at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:23)
	at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:11)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:315)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:211)
	at java.lang.Thread.run(Thread.java:745)

Job aborted due to stage failure

I tried to load random.sas7bdat (from the resources) with both Scala and SparkR, and got the same error when I tried to print the row count of the df (df.count for Scala and nrow(df) for SparkR).

The Spark version is 1.6.2.

> sqlContext <- sparkRSQL.init(sc)
> df <- loadDF(sqlContext,"e:/temp/random.sas7bdat", "com.github.saurfang.sas.spark")
> cache(df)
DataFrame[x:double, f:double]
> printSchema(df)
root
 |-- x: double (nullable = true)
 |-- f: double (nullable = true)
> nrow(df)
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) : 
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
    at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:407)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:430)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:422)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLi

Cannot read file from Azure Blob Path

I attempted to read a SAS dataset from an Azure Blob path, but I am getting the following error:

shaded.databricks.org.apache.hadoop.fs.azure.AzureException: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: Container <my-container> in account <my-account>.blob.core.windows.net not found, and we can't create it using anoynomous credentials, and no credentials found for them in the configuration.

I have configured my credentials using:
spark.conf.set( "fs.azure.account.key.{}.blob.core.windows.net".format(account_name), account_key )

And I can correctly ls the path I'm trying to read from using dbutils.fs.ls. So it seems as though the error is with the package.
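For context, the read pattern in question looks roughly like this (a sketch with placeholder account/container names, using the standard wasbs:// URI scheme):

// Placeholders: <my-account>, <my-container>, <account-key>, path/file.sas7bdat
spark.conf.set(
  "fs.azure.account.key.<my-account>.blob.core.windows.net",
  "<account-key>")

val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .load("wasbs://<my-container>@<my-account>.blob.core.windows.net/path/file.sas7bdat")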

Package: 2.1.0-s_2.11
Platform: Databricks
Spark Version: Latest stable (Scala 2.11)
Python Version: 3.5

Parso error

Hello. I just stumbled on this repo.

In case you don't know, there are cases where Parso silently does the wrong thing:
BioStatMatt/sas7bdat.parso#5

No news since July on when it's getting fixed.

Harry

1.3 support

Running into this..

java.lang.NoClassDefFoundError: org/apache/spark/sql/types/AtomicType
at com.github.saurfang.sas.spark.SasRelation.inferSchema(SasRelation.scala:111)

Digging around, I see murmurs that AtomicType was introduced in 1.4. Does spark-sas7bdat support 1.3? The readme seems to indicate it does.

Thanks!

Using spark-sas7bdat with Spark-1.5.0 yields java.lang.ClassCastException

I can create a DataFrame from a SAS file using spark-shell with your package via val df = sqlContext.sasFile("data/sample3.sas7bdat"). It returns an org.apache.spark.sql.DataFrame = [String: string, Integer: double, Float: double].

But calling df.count results in an exception: java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericMutableRow cannot be cast to org.apache.spark.sql.Row.

Any ideas what's going wrong here?

SAS7BDAT read issue (java.util.Arrays$ArrayList cannot be cast to java.util.Set)

I get the following error when I execute:

df = sqlContext.read.format("com.github.saurfang.sas.spark").load('sas7bdat path')

Any idea what may be causing this issue?

error:

py4j.protocol.Py4JJavaError: An error occurred while calling o66.load.
: java.lang.ClassCastException: java.util.Arrays$ArrayList cannot be cast to java.util.Set
at com.github.saurfang.sas.parso.ParsoWrapper$.DATE_FORMAT_STRINGS$lzycompute(ParsoWrapper.scala:39)
at com.github.saurfang.sas.parso.ParsoWrapper$.DATE_FORMAT_STRINGS(ParsoWrapper.scala:36)
at com.github.saurfang.sas.spark.SasRelation.inferSchema(SasRelation.scala:201)
at com.github.saurfang.sas.spark.SasRelation.<init>(SasRelation.scala:62)
at com.github.saurfang.sas.spark.SasRelation$.apply(SasRelation.scala:43)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:209)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:42)
at com.github.saurfang.sas.spark.DefaultSource.createRelation(DefaultSource.scala:27)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

Issue with the version of Hadoop

Hello,
I am trying to use this library on Databricks cloud to parse a SAS file in the ".sas7bdat" format, but I am unable to do so, as the Hadoop version used by Databricks cloud is 1.2.1 and I am not sure which Hadoop version this package supports.
I am attaching the code and also the output of the code.

Lastly, how do I use this library on a Spark cluster deployed on Amazon EC2 or a Google Cloud cluster? I could not find any information on it.
It would be greatly appreciated if you could help me install the package and parse the SAS file on my Spark cluster.
[screenshot of the code and its error output was attached]

Hive SQL functions not registered when called through sparklyr

I'm trying to use Hive date functions in sparklyr::sdf_sql to manipulate some data; however, some of them return errors saying the function is not registered in the database. This only occurs after spark-sas7bdat is installed on the cluster. Note that I've filed a duplicate of this issue with sparklyr, as I'm not sure which team would own it.
Reproducible example below:


>library(sparklyr)
>library(dplyr)
Attaching package: 'dplyr'

The following objects are masked from 'package:stats':

    filter, lag

The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union

>sc <- spark_connect(method="databricks")
>dat <- data.frame(person=rep(c(1:3),3), measure=rnorm(9))
>src_tbls(sc)
[1] "netprice_092018"        "netprice_42018"         "test_netprice_external"
[4] "test_table"  

>dat <- data.frame(person=rep(c(1:3),3), measure=rnorm(9))
>dat_sparkly <- copy_to(sc, dat, "dat_sparkly") #Gives error, but "dat_sparkly" is sent to Spark (see next command). Same root cause as other errors below?
Error : org.apache.spark.sql.AnalysisException: Undefined function: 'count'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 7 (NOTE: If you wish to use SparkR, import it by calling 'library(SparkR)'.)

#copy_to gives error, however table is correctly sent to Spark:
>src_tbls(sc)
[1] "dat_sparkly"            "netprice_092018"        "netprice_42018"        
[4] "test_netprice_external" "test_table"

>sdf_sql(sc, "select * from dat_sparkly") #Works
# Source: spark [?? x 2]
  person measure
*     
1      1  -0.354
2      2  -0.197
3      3  -0.747
4      1   0.118
5      2  -0.742
6      3  -0.430
7      1  -2.55 
8      2   0.886
9      3  -0.713

>sdf_sql(sc, "select current_date from dat_sparkly") #Works
# Source: spark [?? x 1]
  `current_date()`
*           
1 2018-10-12      
2 2018-10-12      
3 2018-10-12      
4 2018-10-12      
5 2018-10-12      
6 2018-10-12      
7 2018-10-12      
8 2018-10-12      
9 2018-10-12 

>sdf_sql(sc, "select date_format(current_date,'E') as week from dat_sparkly") #FAILS

Error : org.apache.spark.sql.AnalysisException: Undefined function: 'date_format'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 7

Session info below:

>devtools::session_info()
Session info ------------------------------------------------------------------
Packages ----------------------------------------------------------------------
 setting  value                       
 version  R version 3.4.4 (2018-03-15)
 system   x86_64, linux-gnu           
 ui       X11                         
 language (EN)                        
 collate  en_US.UTF-8                 
 tz       Etc/UTC                     
 date     2018-10-12                  

 package       * version date       source        
 assertthat      0.2.0   2017-04-11 CRAN (R 3.4.4)
 backports       1.1.2   2017-12-13 CRAN (R 3.4.4)
 base          * 3.4.4   2018-03-16 local         
 base64enc       0.1-3   2015-07-28 CRAN (R 3.4.4)
 bindr           0.1.1   2018-03-13 CRAN (R 3.4.4)
 bindrcpp        0.2.2   2018-03-29 CRAN (R 3.4.4)
 broom           0.4.4   2018-03-29 CRAN (R 3.4.4)
 cli             1.0.0   2017-11-05 CRAN (R 3.4.4)
 compiler        3.4.4   2018-03-16 local         
 config          0.3     2018-03-27 CRAN (R 3.4.4)
 crayon          1.3.4   2017-09-16 CRAN (R 3.4.4)
 datasets      * 3.4.4   2018-03-16 local         
 DBI             0.8     2018-03-02 CRAN (R 3.4.4)
 dbplyr          1.2.2   2018-07-25 CRAN (R 3.4.4)
 devtools        1.13.5  2018-02-18 CRAN (R 3.4.4)
 digest          0.6.15  2018-01-28 CRAN (R 3.4.4)
 dplyr         * 0.7.4   2017-09-28 CRAN (R 3.4.4)
 foreign         0.8-70  2018-04-23 CRAN (R 3.4.4)
 forge           0.1.0   2018-08-31 CRAN (R 3.4.4)
 glue            1.2.0   2017-10-29 CRAN (R 3.4.4)
 graphics      * 3.4.4   2018-03-16 local         
 grDevices     * 3.4.4   2018-03-16 local         
 grid            3.4.4   2018-03-16 local         
 htmltools       0.3.6   2017-04-28 CRAN (R 3.4.4)
 htmlwidgets     1.3     2018-09-30 CRAN (R 3.4.4)
 httpuv          1.4.5   2018-07-19 CRAN (R 3.4.4)
 httr            1.3.1   2017-08-20 CRAN (R 3.4.4)
 hwriter         1.3.2   2014-09-10 CRAN (R 3.4.4)
 hwriterPlus     1.0-3   2015-01-05 CRAN (R 3.4.4)
 jsonlite        1.5     2017-06-01 CRAN (R 3.4.4)
 later           0.7.5   2018-09-18 CRAN (R 3.4.4)
 lattice         0.20-35 2017-03-25 CRAN (R 3.3.3)
 lazyeval        0.2.1   2017-10-29 CRAN (R 3.4.4)
 magrittr        1.5     2014-11-22 CRAN (R 3.4.4)
 memoise         1.1.0   2017-04-21 CRAN (R 3.4.4)
 methods       * 3.4.4   2018-03-16 local         
 mime            0.5     2016-07-07 CRAN (R 3.4.4)
 mnormt          1.5-5   2016-10-15 CRAN (R 3.4.4)
 nlme            3.1-137 2018-04-07 CRAN (R 3.4.4)
 parallel        3.4.4   2018-03-16 local         
 pillar          1.2.1   2018-02-27 CRAN (R 3.4.4)
 pkgconfig       2.0.1   2017-03-21 CRAN (R 3.4.4)
 plyr            1.8.4   2016-06-08 CRAN (R 3.4.4)
 promises        1.0.1   2018-04-13 CRAN (R 3.4.4)
 psych           1.8.3.3 2018-03-30 CRAN (R 3.4.4)
 purrr           0.2.4   2017-10-18 CRAN (R 3.4.4)
 r2d3            0.2.2   2018-05-30 CRAN (R 3.4.4)
 R6              2.2.2   2017-06-17 CRAN (R 3.4.4)
 Rcpp            0.12.16 2018-03-13 CRAN (R 3.4.4)
 reshape2        1.4.3   2017-12-11 CRAN (R 3.4.4)
 rlang           0.2.0   2018-02-20 CRAN (R 3.4.4)
 rprojroot       1.3-2   2018-01-03 CRAN (R 3.4.4)
 Rserve          1.7-3   2013-08-21 CRAN (R 3.4.4)
 rstudioapi      0.7     2017-09-07 CRAN (R 3.4.4)
 shiny           1.1.0   2018-05-17 CRAN (R 3.4.4)
 sparklyr      * 0.9.1   2018-09-27 CRAN (R 3.4.4)
 SparkR          2.3.1   2018-10-12 local         
 stats         * 3.4.4   2018-03-16 local         
 stringi         1.1.7   2018-03-12 CRAN (R 3.4.4)
 stringr         1.3.0   2018-02-19 CRAN (R 3.4.4)
 TeachingDemos   2.10    2016-02-12 CRAN (R 3.4.4)
 tibble          1.4.2   2018-01-22 CRAN (R 3.4.4)
 tidyr           0.8.0   2018-01-29 CRAN (R 3.4.4)
 tools           3.4.4   2018-03-16 local         
 utf8            1.1.3   2018-01-03 CRAN (R 3.4.4)
 utils         * 3.4.4   2018-03-16 local         
 withr           2.1.2   2018-03-15 CRAN (R 3.4.4)
 xtable          1.8-3   2018-08-29 CRAN (R 3.4.4)
 yaml            2.2.0   2018-07-25 CRAN (R 3.4.4)

Not able to read all records from SAS dataset.

Hi Team,
I am trying to use this package and I am able to read the SAS data into Spark as a DataFrame. However, I am not able to read the entire SAS dataset into the DataFrame. When using version 1.1.5-s_2.11, I am not able to read the last 3 rows from the SAS dataset. Please find below a few scenarios:

  1. sas dataset has 100 records, df.show(97) works fine. However, df.show(98) hangs.
  2. sas dataset has 50 records, df.show(47) works fine. However, df.show(48) hangs.
  3. sas dataset has 19 records, df.show(16) works fine. However, df.show(17) hangs.

Similarly, for package 1.1.5-s_2.11, the last 2 records cause an issue.

  1. sas dataset has 100 records, df.show(98) works fine. However, df.show(99) hangs.
  2. sas dataset has 50 records, df.show(48) works fine. However, df.show(49) hangs.
  3. sas dataset has 19 records, df.show(17) works fine. However, df.show(18) hangs.

Could you please help here to resolve this issue?

Looking forward to hearing from the team.

Regards,
Neeraj

SasFileParser: Failed to read page from file

Hi

I have uploaded a sas7bdat file into HDFS and am trying to use the sas7bdat library to read this file into a Spark DataFrame (using PySpark).

The code is quite straightforward:

[screenshot of the code was attached]

P.S. I can confirm that the file exists:

[screenshot confirming the file exists was attached]

Unable to load SAS dataset into Spark 2.3.0

Hi
We have a Spark cluster with Spark 2.3.0.
The jar spark-sas7bdat-2.1.0-s_2.11.jar is not working with Spark 2.3.0; it appears it only works with Spark 2.2.0.
Please suggest whether there is a workaround for Spark 2.3.0 or whether we have to downgrade to Spark 2.2.0.

Bintray Returning 403 Forbidden Request for Entire Spark Packages

If I go here: http://dl.bintray.com/spark-packages/maven/saurfang/spark-sas7bdat/3.0.0-s_2.12/spark-sas7bdat-3.0.0-s_2.12.jar

[screenshot of the 403 Forbidden response was attached]

It returns a 403 forbidden response.

Additionally, I went to the top level of the website (https://dl.bintray.com/) and it does not work either. I am not sure who to reach out to about this (I am new to posting issues), but I wanted to bring it up in case you know of a different location on the internet where applications can access the jars.

Apologies in advance if I have done things wrong here by posting.

Support reading multiple SAS files, and merging of schema.

We should allow multiple SAS files to be read into a single DataFrame, merging their schemas if possible.

This is a lot easier with Parso 2.0.10, as we can specify which column names we want to read, and in what order, when calling readNext().

My recommendation is that we copy how core Spark merges schemas: effectively take the union of all columns, fill nulls for rows from files that lack a given column, and throw an error for columns with the same name but incompatible types (e.g. String and Int).

The only issue is that, since we infer types from SAS column formatters, the merge will fail if the user has inferDecimal enabled and the column precision has changed between two files.

Parquet Example:

spark.read.option("mergeSchema", "true").parquet("table*.parquet")

Implicit import not working in spark 1.4.0

I can't get this to work:
val df = sqlContext.read.format("com.github.saurfang.sas.spark").load("cars.sas7bdat")

Getting:
java.lang.RuntimeException: Failed to load class for data source: com.github.saurfang.sas.spark

It works fine when importing explicitly:
import com.github.saurfang.sas.spark._
and using sasFile()

Other packages such as spark-csv work fine with the format()/load() method, which confirms this is not a local issue.

Not all records from SAS file being loaded into DataFrame

I'm running into an issue where the DataFrame is missing a small percentage (0.05%) of records when using the library to load from a SAS file. I ran a test just using the Parso library, and it was able to read all of the records. Additionally, I tried using the sas7bdat Python library and that was able to read all of the data as well.
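A quick way to quantify the discrepancy is to compare the row count in the file's metadata and a plain sequential parso read against the Spark count. A hedged sketch, assuming parso's readNext() returns null at end of file and using data.sas7bdat as a placeholder local path:

import java.io.FileInputStream
import com.epam.parso.impl.SasFileReaderImpl

// Row count according to the file's own metadata, plus a full sequential read.
val reader = new SasFileReaderImpl(new FileInputStream("data.sas7bdat"))
val metadataRows = reader.getSasFileProperties.getRowCount
var parsoRows = 0L
while (reader.readNext() != null) parsoRows += 1

// Row count as seen through the Spark data source.
val sparkRows = spark.read
  .format("com.github.saurfang.sas.spark")
  .load("data.sas7bdat")
  .count()

println(s"metadata: $metadataRows, parso: $parsoRows, spark: $sparkRows")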

BufferUnderflowException at SasFileParser.bytesToDate

I'm getting an error while trying to load in a 205MB sas7bdat file with
spark.read.format("com.github.saurfang.sas.spark").load(args(1)). Unfortunately, I cannot share the sas file as it is confidential.

17/06/26 11:18:53 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-172-31-27-210.ec2.internal, executor 1): java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:498)
        at java.nio.HeapByteBuffer.getDouble(HeapByteBuffer.java:508)
        at com.ggasoftware.parso.SasFileParser.bytesToDate(SasFileParser.java:1304)
        at com.ggasoftware.parso.SasFileParser.processByteArrayWithData(SasFileParser.java:1106)
        at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:887)
        ... 32 more

17/06/26 11:18:54 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-172-31-27-210.ec2.internal, executor 1): java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:498)
        at java.nio.HeapByteBuffer.getDouble(HeapByteBuffer.java:508)
        at com.ggasoftware.parso.SasFileParser.bytesToDate(SasFileParser.java:1304)
        at com.ggasoftware.parso.SasFileParser.processByteArrayWithData(SasFileParser.java:1106)
        at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:887)
        ... 32 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1669)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1624)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1613)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1893)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2193)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
        at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2199)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1935)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1934)
        at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2576)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:1934)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2149)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:526)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:486)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:495)
        at ingestion.Main$.main(Main.scala:28)
        at ingestion.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
        at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
        at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:797)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:498)
        at java.nio.HeapByteBuffer.getDouble(HeapByteBuffer.java:508)
        at com.ggasoftware.parso.SasFileParser.bytesToDate(SasFileParser.java:1304)
        at com.ggasoftware.parso.SasFileParser.processByteArrayWithData(SasFileParser.java:1106)
        at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:887)
        ... 32 more

Issues with very wide datasets

There appears to be a bug in either parso or spark-sas7bdat when reading a very wide file. When I read a sas7bdat with only 18 columns it works perfectly, but when I read my sas7bdat with ~13000 columns it breaks: every value that isn't zero comes back as null.

Case sensitivity in column names

Since 2.0, Spark is case-sensitive in column names, so it might make sense to provide users with a flag for spark-sas7bdat that forces inferred column names to lower case, as SAS itself is case-insensitive.

What are people's thoughts?
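
In the meantime, a possible workaround (a minimal sketch using only standard DataFrame operations; `spark` is assumed to be an active SparkSession and the path is illustrative) is to lower-case the column names right after loading:

import org.apache.spark.sql.DataFrame

// Rename every inferred column to lower case after the load.
def lowercaseColumns(df: DataFrame): DataFrame =
  df.toDF(df.columns.map(_.toLowerCase): _*)

val raw = spark.read.format("com.github.saurfang.sas.spark").load("path/to/data.sas7bdat")
val df  = lowercaseColumns(raw)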

Failure Writing 20GB+ Files

I've been running into failures with certain files larger than 20 GB. The errors come in two varieties:

  1. When writing the data as CSV, a FileAlreadyExists exception is thrown because the job attempts to write a file that has already been written.
  2. Sometimes a java.io.IOException is thrown with the message "There are no available bytes in the input stream."

I'm loading data from a SAS file into a DataFrame, then writing it out as a CSV. A sample of what I'm doing is below:

val df = spark.read.format("com.github.saurfang.sas.spark")
    .option("header", true)
    .load("s3://bucket/file.sas7bdat")

df.write.csv("s3://bucket/output")
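
Two things that may help narrow this down (a sketch only; the split size value is illustrative, and `header` is a CSV writer option rather than one of the documented reader parameters):

import org.apache.spark.sql.SaveMode

// Fewer, larger splits (maxSplitSize is a documented reader option; the value here is arbitrary),
// and an explicit overwrite so a re-run can replace a partially written output directory.
val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .option("maxSplitSize", 512L * 1024 * 1024)
  .load("s3://bucket/file.sas7bdat")

df.write
  .mode(SaveMode.Overwrite)
  .option("header", "true")
  .csv("s3://bucket/output")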

java.lang.ArrayIndexOutOfBoundsException

I get this error when loading a fairly large dataset. I filed the main issue report over on @epam's parso repository, but I am wondering whether there is something about this project's usage of parso that could be related or worked around (for example, would turning off compression make this dataset splittable and sidestep the problem?).

The stack trace is:
Caused by: java.lang.ArrayIndexOutOfBoundsException
at java.util.Arrays.copyOfRange(Arrays.java:3521)
at com.epam.parso.impl.SasFileParser.getBytesFromFile(SasFileParser.java:717)
at com.epam.parso.impl.SasFileParser.readSubheaderSignature(SasFileParser.java:403)
at com.epam.parso.impl.SasFileParser.processPageMetadata(SasFileParser.java:372)
at com.epam.parso.impl.SasFileParser.processNextPage(SasFileParser.java:566)
at com.epam.parso.impl.SasFileParser.readNextPage(SasFileParser.java:544)
... 45 more
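
One way to test the splittability question without touching the file (a sketch; it assumes the documented minSplitSize option behaves like Hadoop's mapred.min.split.size, and the 1 TB value is simply "larger than the file"):

// Force a single input split: if the job then succeeds, the failure is likely
// tied to split-boundary handling rather than to the data or to compression itself.
val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .option("minSplitSize", 1024L * 1024 * 1024 * 1024)  // effectively "do not split this file"
  .load("path/to/large.sas7bdat")

df.count()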

Unable to load SAS data set into Spark

I am using the code below to load a sample SAS data set into Spark and I get a timeout error. I tried increasing the timeout with the 'metadataTimeout' option, but it still times out while reading the metadata. Any help is appreciated.

import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /Spark/spark-2.4.1-bin-hadoop2.7/jars/spark-sas7bdat-2.1.0-s_2.11.jar pyspark-shell'
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("Spark App").getOrCreate()
df = spark.read.format("com.github.saurfang.sas.spark").load("airline.sas7bdat", forceLowercaseNames=True, inferLong=True)

Error:
Py4JJavaError: An error occurred while calling o226.load.
: java.util.concurrent.TimeoutException: Timed out after 60 sec while reading file metadata, file might be corrupt. (Change timeout with 'metadataTimeout' paramater)
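
The timeout is configurable; a sketch of raising it (Scala shown here, but the same metadataTimeout key can be passed to load() in PySpark exactly like the other options above; 600 is an arbitrary value):

val df = spark.read
  .format("com.github.saurfang.sas.spark")
  .option("forceLowercaseNames", true)
  .option("inferLong", true)
  .option("metadataTimeout", 600L)  // seconds; the default is 60
  .load("airline.sas7bdat")

If a much larger value still times out, the file itself (or the pairing of the spark-sas7bdat jar with the Spark version in use) is worth double-checking.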

java.lang.reflect.InvocationTargetException

Hello,

I am trying to load a sas7bdat file but I get this exception, and I am unable to debug it because I cannot unwrap the underlying Java exception.

It seems to be thrown after a DataFrame has already been created, since the latest output is:
ffgdirect: org.apache.spark.sql.DataFrame = [asd_dt: string, industry_seg: string,...

Any ideas?

at sun.reflect.GeneratedMethodAccessor104.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.<init>(SasRecordReader.scala:110)
at com.github.saurfang.sas.mapred.SasInputFormat.getRecordReader(SasInputFormat.scala:15)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
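
To get at the real error, a sketch of unwrapping the cause chain (plain JVM exception handling, nothing specific to this library; `ffgdirect` is the DataFrame from the output above):

import scala.annotation.tailrec

// InvocationTargetException is just reflection plumbing; the interesting error is its cause.
@tailrec
def rootCause(t: Throwable): Throwable =
  if (t.getCause == null || (t.getCause eq t)) t else rootCause(t.getCause)

try {
  ffgdirect.count()  // or whichever action triggers the exception
} catch {
  case e: Throwable => rootCause(e).printStackTrace()
}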

Error parsing SAS file distributed in HDFS

A SAS file downloaded from "http://nces.ed.gov/ccd/Data/zip/ag121a_supp_sas.zip" (48.59 MB) was stored in HDFS with a block size of 32 MB.
If I take 5 elements and print them, they display fine. However, if I count the number of rows, I get the error below.
(If I run the same code with the file uploaded with a block size of 128 MB, everything works fine.)
The Scala code is as follows:

import org.apache.spark.sql.SQLContext
import com.github.saurfang.sas.spark._
val sq = new SQLContext(sc)
val sasData = sq.read.format("com.github.saurfang.sas.spark").load("hdfs://XXX.XX.XX:8020/sas/ag121a_supp.sas7bdat")
sasData.take(5).foreach(println)
sasData.count()

The error message is the following:
Name: org.apache.spark.SparkException
Message: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 5, ip-XXXeu-west-1.compute.internal): java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
    at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:117)
    at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:116)
    at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:129)
    at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:248)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:216)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1.org$apache$spark$sql$execution$aggregate$TungstenAggregate$$anonfun$$executePartition$1(TungstenAggregate.scala:97)
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119)
    at org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:119)
    at org.apache.spark.rdd.MapPartitionsWithPreparationRDD.compute(MapPartitionsWithPreparationRDD.scala:64)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
    at java.util.ArrayList.rangeCheck(ArrayList.java:635)
    at java.util.ArrayList.get(ArrayList.java:411)
    at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
    ... 32 more

Driver stacktrace:
StackTrace: org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
scala.Option.foreach(Option.scala:236)
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
org.apache.spark.rdd.RDD.collect(RDD.scala:908)
org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:177)
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903)
org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384)
org.apache.spark.sql.DataFrame.count(DataFrame.scala:1402)
$line23.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:23)
$line23.$read$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:28)
$line23.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
$line23.$read$$iwC$$iwC$$iwC.<init>(<console>:32)
$line23.$read$$iwC$$iwC.<init>(<console>:34)
$line23.$read$$iwC.<init>(<console>:36)
$line23.$read.<init>(<console>:38)
$line23.$read$.<init>(<console>:42)
$line23.$read$.<clinit>(<console>)
$line23.$eval$.<init>(<console>:7)
$line23.$eval$.<clinit>(<console>)
$line23.$eval.$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1$$anonfun$apply$3.apply(ScalaInterpreter.scala:356)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1$$anonfun$apply$3.apply(ScalaInterpreter.scala:351)
org.apache.toree.global.StreamState$.withStreams(StreamState.scala:81)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1.apply(ScalaInterpreter.scala:350)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1.apply(ScalaInterpreter.scala:350)
org.apache.toree.utils.TaskManager$$anonfun$add$2$$anon$1.run(TaskManager.scala:140)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

My specs:

Spark 1.5.2
Scala 2.10
spark-sas7bdat 1.1.4-s_2.10
parso-1.2.1

BR
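
Since the same file works when stored with a 128 MB block size, one thing worth trying (a sketch, assuming the reader honours mapred.min.split.size as the default source for minSplitSize; the 128 MB value mirrors the working case) is to keep input splits at least as large as a full block:

// Ask for input splits of at least 128 MB even though the HDFS block size is 32 MB.
sc.hadoopConfiguration.setLong("mapred.min.split.size", 128L * 1024 * 1024)

val sasData = sq.read
  .format("com.github.saurfang.sas.spark")
  .load("hdfs://XXX.XX.XX:8020/sas/ag121a_supp.sas7bdat")
sasData.count()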

java.lang.ClassCastException on df.count()

I loaded a SAS dataset into a DataFrame without incident and can print the schema. However, when I do anything with it (like count() or save()) I get a cast exception (trimmed stack trace below; let me know if more is helpful). It's a small dataset, 1.6 GB. I did notice that the latest Maven spark-sas7bdat doesn't pull in the latest parso (2.0.9), but the only bug apparently fixed in parso 2.0.9 does not seem related to what I'm seeing.

Command in spark shell:
spark.read.format("com.github.saurfang.sas.spark").load("hdfs://nameservice1/users/bob/test.sas7bdat").count()

This is on Cloudera spark 2.2
libraryDependencies += "saurfang" % "spark-sas7bdat" % "2.0.0-s_2.11"


Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
......
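
If the parso version is worth ruling out, a sketch of forcing the newer release in sbt (this assumes the 2.0.x parso line is compatible with what spark-sas7bdat 2.0.0 expects; not verified):

// build.sbt (sketch): keep the published spark-sas7bdat, but override its transitive parso.
libraryDependencies += "saurfang" % "spark-sas7bdat" % "2.0.0-s_2.11"
dependencyOverrides += "com.epam" % "parso" % "2.0.9"

That said, a ClassCastException around scala.collection.immutable.List$SerializationProxy often points at a driver/executor classpath mismatch (how the jar is shipped to the cluster) rather than the parser itself, so checking how the package is attached to the executors may matter more.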

Decompression of sas7bdat.bz2 file is not distributed across worker nodes

Hello,

I have been experimenting with the bz2 decompression functionality in the repo's master branch, which isn't part of your last release. When a bz2-compressed file is read, the decompression seems to happen on one worker node only. Is it possible to parallelise the decompression of externally compressed files?

Thanks in advance for your response.
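
If the reader currently yields a single partition for an externally compressed file, the decompression itself cannot be spread out from user code, but everything after the read can be, e.g. by repartitioning straight after the load (a sketch; the path and partition count are illustrative):

import com.github.saurfang.sas.spark._

// Decompression and parsing happen in however many splits the reader produces;
// repartition so downstream transforms and the write are distributed regardless.
val df = spark.read.sas("data/file.sas7bdat.bz2").repartition(200)
df.write.parquet("data/file_parquet")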

Issue reading files larger than 512 MB

I am trying to read SAS files with this package. With smaller files it works fine, but for files larger than 512 MB I get the error below:
ERROR InsertIntoHadoopFsRelationCommand: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 8, 10.0.0.5): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
... 32 more

17/04/04 10:47:32 INFO TaskSetManager: Starting task 3.1 in stage 0.0 (TID 4, 10.0.0.5, partition 3, PROCESS_LOCAL, 5573 bytes)
17/04/04 10:47:32 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 4 on executor id: 1 hostname: 10.0.0.5.
17/04/04 10:47:32 WARN TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, 10.0.0.5): java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.<init>(SasRecordReader.scala:111)
at com.github.saurfang.sas.mapred.SasInputFormat.getRecordReader(SasInputFormat.scala:15)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.StackOverflowError
at com.ggasoftware.parso.SasFileParser.readPageHeader(SasFileParser.java:949)
at com.ggasoftware.parso.SasFileParser.readNextPage(SasFileParser.java:928)

at com.ggasoftware.parso.SasFileParser.readNextPage(SasFileParser.java:935)
at com.ggasoftware.parso.SasFileParser.readNextPage(SasFileParser.java:935)

17/04/04 10:47:32 INFO TaskSetManager: Starting task 2.1 in stage 0.0 (TID 5, 10.0.0.5, partition 2, PROCESS_LOCAL, 5573 bytes)
17/04/04 10:47:32 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 5 on executor id: 2 hostname: 10.0.0.5.
17/04/04 10:47:33 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on executor 10.0.0.5: java.lang.reflect.InvocationTargetException (null) [duplicate 1]
17/04/04 10:47:33 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 6, 10.0.0.5, partition 1, PROCESS_LOCAL, 5573 bytes)
17/04/04 10:47:33 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 6 on executor id: 1 hostname: 10.0.0.5.
17/04/04 10:47:33 INFO TaskSetManager: Lost task 3.1 in stage 0.0 (TID 4) on executor 10.0.0.5: org.apache.spark.SparkException (Task failed while writing rows) [duplicate 1]
17/04/04 10:47:33 INFO TaskSetManager: Starting task 3.2 in stage 0.0 (TID 7, 10.0.0.5, partition 3, PROCESS_LOCAL, 5573 bytes)
17/04/04 10:47:33 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 7 on executor id: 1 hostname: 10.0.0.5.
17/04/04 10:47:34 INFO TaskSetManager: Lost task 3.2 in stage 0.0 (TID 7) on executor 10.0.0.5: org.apache.spark.SparkException (Task failed while writing rows) [duplicate 2]
17/04/04 10:47:34 INFO TaskSetManager: Starting task 3.3 in stage 0.0 (TID 8, 10.0.0.5, partition 3, PROCESS_LOCAL, 5573 bytes)
17/04/04 10:47:34 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 8 on executor id: 1 hostname: 10.0.0.5.
17/04/04 10:47:35 INFO TaskSetManager: Lost task 3.3 in stage 0.0 (TID 8) on executor 10.0.0.5: org.apache.spark.SparkException (Task failed while writing rows) [duplicate 3]
17/04/04 10:47:35 ERROR TaskSetManager: Task 3 in stage 0.0 failed 4 times; aborting job
17/04/04 10:47:35 INFO YarnScheduler: Cancelling stage 0
17/04/04 10:47:35 INFO YarnScheduler: Stage 0 was cancelled
17/04/04 10:47:35 INFO DAGScheduler: ResultStage 0 (save at Wordcount.scala:52) failed in 8.345 s
17/04/04 10:47:35 INFO DAGScheduler: Job 0 failed: save at Wordcount.scala:52, took 8.458114 s
17/04/04 10:47:35 ERROR InsertIntoHadoopFsRelationCommand: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 8, 10.0.0.5): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
... 32 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at com.dataflair.spark.Wordcount$.main(Wordcount.scala:52)
at com.dataflair.spark.Wordcount.main(Wordcount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
... 32 more
17/04/04 10:47:35 WARN AzureFileSystemThreadPoolExecutor: Disabling threads for Delete operation as thread count 0 is <= 1
17/04/04 10:47:35 ERROR AzureNativeFileSystemStore: Encountered Storage Exception for delete on Blob: Exception Details: The specified blob does not exist. Error Code: BlobNotFound
17/04/04 10:47:35 WARN AzureFileSystemThreadPoolExecutor: Failed to Delete file
:rwxrwxrwx]
17/04/04 10:47:35 ERROR NativeAzureFileSystem: Failed to delete files / subfolders in blob
test/sourceoutput/epei.csv/_temporary
17/04/04 10:47:35 ERROR DefaultWriterContainer: Job job_201704041047_0000 aborted.
Exception in thread "main" org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at com.dataflair.spark.Wordcount$.main(Wordcount.scala:52)
at com.dataflair.spark.Wordcount.main(Wordcount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 8, 10.0.0.5): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
... 32 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
... 29 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.github.saurfang.sas.util.PrivateMethodCaller.apply(PrivateMethodExposer.scala:11)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$lzycompute$1(SasRecordReader.scala:119)
at com.github.saurfang.sas.mapred.SasRecordReader.readNext$1(SasRecordReader.scala:118)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:131)
at com.github.saurfang.sas.mapred.SasRecordReader.next(SasRecordReader.scala:19)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.ggasoftware.parso.SasFileParser.readNext(SasFileParser.java:876)
... 32 more
17/04/04 10:47:35 INFO SparkContext: Invoking stop() from shutdown hook
17/04/04 10:47:35 INFO ServerConnector: Stopped ServerConnector@4108fa66{HTTP/1.1}{0.0.0.0:4040}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@27e0f2f5{/stages/stage/kill,null,UNAVAILABLE}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@9cd25ff{/api,null,UNAVAILABLE}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@69f63d95{/,null,UNAVAILABLE}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@660e9100{/static,null,UNAVAILABLE}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6928f576{/executors/threadDump/json,null,UNAVAILABLE}
17/04/04 10:47:35 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@182f1e9a{/executors/threadDump,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@758f4f03{/executors/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@61edc883{/executors,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5cc5b667{/environment/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@b5cc23a{/environment,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5398edd0{/storage/rdd/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@3591009c{/storage/rdd,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@4152d38d{/storage/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7555b920{/storage,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@4cc547a{/stages/pool/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7a11c4c7{/stages/pool,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7e094740{/stages/stage/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@64c2b546{/stages/stage,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@578524c3{/stages/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@551a20d6{/stages,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5fe8b721{/jobs/job/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@560cbf1a{/jobs/job,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@740abb5{/jobs/json,null,UNAVAILABLE}
17/04/04 10:47:36 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@56db847e{/jobs,null,UNAVAILABLE}
17/04/04 10:47:36 INFO SparkUI: Stopped Spark web UI at http://10.0.0.17:4040
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.BlockManager.disk.diskSpaceUsed_MB, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.BlockManager.memory.maxMem_MB, value=8744
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.BlockManager.memory.memUsed_MB, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.BlockManager.memory.remainingMem_MB, value=8744
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.DAGScheduler.job.activeJobs, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.DAGScheduler.job.allJobs, value=1
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.DAGScheduler.stage.failedStages, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.DAGScheduler.stage.runningStages, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.DAGScheduler.stage.waitingStages, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.PS-MarkSweep.count, value=3
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.PS-MarkSweep.time, value=195
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.PS-Scavenge.count, value=6
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.PS-Scavenge.time, value=120
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.heap.committed, value=686292992
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.heap.init, value=461373440
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.heap.max, value=954728448
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.heap.usage, value=0.18328408288929504
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.heap.used, value=175308568
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.non-heap.committed, value=104161280
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.non-heap.init, value=2555904
17/04/04 10:47:36 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor 10.0.0.5: org.apache.spark.SparkException (Task failed while writing rows) [duplicate 4]
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.non-heap.max, value=-1
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.non-heap.usage, value=-1.0187864E8
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.non-heap.used, value=101881368
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Code-Cache.committed, value=17170432
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Code-Cache.init, value=2555904
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Code-Cache.max, value=251658240
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Code-Cache.usage, value=0.06387176513671874
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Code-Cache.used, value=16073856
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Compressed-Class-Space.committed, value=10747904
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Compressed-Class-Space.init, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Compressed-Class-Space.max, value=1073741824
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Compressed-Class-Space.usage, value=0.009773895144462585
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Compressed-Class-Space.used, value=10494640
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Metaspace.committed, value=76242944
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Metaspace.init, value=0
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Metaspace.max, value=-1
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Metaspace.usage, value=0.9880171468719781
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.Metaspace.used, value=75329336
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Eden-Space.committed, value=199229440
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Eden-Space.init, value=115867648
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Eden-Space.max, value=259522560
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Eden-Space.usage, value=0.06554482199928978
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Eden-Space.used, value=17010360
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Old-Gen.committed, value=468189184
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Old-Gen.init, value=307757056
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Old-Gen.max, value=716177408
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Old-Gen.usage, value=0.1961920474319123
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Old-Gen.used, value=140508312
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Survivor-Space.committed, value=18874368
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Survivor-Space.init, value=18874368
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Survivor-Space.max, value=18874368
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Survivor-Space.usage, value=0.9998643663194444
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.pools.PS-Survivor-Space.used, value=18871808
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.total.committed, value=790454272
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.total.init, value=463929344
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.total.max, value=954728447
17/04/04 10:47:36 INFO metrics: type=GAUGE, name=application_1491208276448_0097.driver.jvm.total.used, value=279069288
17/04/04 10:47:36 INFO metrics: type=HISTOGRAM, name=application_1491208276448_0097.driver.CodeGenerator.compilationTime, count=1, min=148, max=148, mean=148.0, stddev=0.0, median=148.0, p75=148.0, p95=148.0, p98=148.0, p99=148.0, p999=148.0
17/04/04 10:47:36 INFO metrics: type=HISTOGRAM, name=application_1491208276448_0097.driver.CodeGenerator.generatedClassSize, count=2, min=532, max=2376, mean=1454.0, stddev=922.0, median=2376.0, p75=2376.0, p95=2376.0, p98=2376.0, p99=2376.0, p999=2376.0
17/04/04 10:47:36 INFO metrics: type=HISTOGRAM, name=application_1491208276448_0097.driver.CodeGenerator.generatedMethodSize, count=5, min=5, max=221, mean=64.2, stddev=81.832511876393, median=15.0, p75=70.0, p95=221.0, p98=221.0, p99=221.0, p999=221.0
17/04/04 10:47:36 INFO metrics: type=HISTOGRAM, name=application_1491208276448_0097.driver.CodeGenerator.sourceCodeSize, count=1, min=1962, max=1962, mean=1962.0, stddev=0.0, median=1962.0, p75=1962.0, p95=1962.0, p98=1962.0, p99=1962.0, p999=1962.0
17/04/04 10:47:36 INFO metrics: type=TIMER, name=application_1491208276448_0097.driver.DAGScheduler.messageProcessingTime, count=20, min=0.0247, max=105.177696, mean=9.992431606157076, stddev=27.565178565314206, median=0.38521, p75=1.296533, p95=79.069126, p98=105.177696, p99=105.177696, p999=105.177696, mean_rate=0.4768138149475206, m1=0.21203362042935395, m5=0.045742426543696334, m15=0.01545137723000816, rate_unit=events/second, duration_unit=milliseconds
17/04/04 10:47:36 INFO TaskSetManager: Lost task 1.1 in stage 0.0 (TID 6) on executor 10.0.0.5: org.apache.spark.SparkException (Task failed while writing rows) [duplicate 5]
17/04/04 10:47:36 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerTaskEnd(0,0,ResultTask,ExceptionFailure(org.apache.spark.SparkException,Task failed while writing rows,[Ljava.lang.StackTraceElement;@6e7a81f0,org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.TaskKilledException
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:253)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
,Some(org.apache.spark.ThrowableSerializationWrapper@73d4f240),Vector(AccumulableInfo(3,Some(internal.metrics.executorRunTime),Some(2798),None,true,true,None), AccumulableInfo(4,Some(internal.metrics.resultSize),Some(0),None,true,true,None), AccumulableInfo(5,Some(internal.metrics.jvmGCTime),Some(37),None,true,true,None), AccumulableInfo(20,Some(internal.metrics.input.bytesRead),Some(52690944),None,true,true,None)),Vector(LongAccumulator(id: 3, name: Some(internal.metrics.executorRunTime), value: 2798), LongAccumulator(id: 4, name: Some(internal.metrics.resultSize), value: 0), LongAccumulator(id: 5, name: Some(internal.metrics.jvmGCTime), value: 37), LongAccumulator(id: 20, name: Some(internal.metrics.input.bytesRead), value: 52690944))),org.apache.spark.scheduler.TaskInfo@7ce29ac9,org.apache.spark.executor.TaskMetrics@45ffb6b9)
17/04/04 10:47:36 INFO YarnClientSchedulerBackend: Interrupting monitor thread
17/04/04 10:47:36 INFO YarnClientSchedulerBackend: Shutting down all executors
17/04/04 10:47:36 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/04/04 10:47:36 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
17/04/04 10:47:36 INFO YarnClientSchedulerBackend: Stopped
17/04/04 10:47:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/04/04 10:47:36 INFO MemoryStore: MemoryStore cleared
17/04/04 10:47:36 INFO BlockManager: BlockManager stopped
17/04/04 10:47:36 INFO BlockManagerMaster: BlockManagerMaster stopped
17/04/04 10:47:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/04/04 10:47:36 INFO SparkContext: Successfully stopped SparkContext
17/04/04 10:47:36 INFO ShutdownHookManager: Shutdown hook called
17/04/04 10:47:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-42ea5e7c-0997-4a1e-9ff2-881cff8a04d2
17/04/04 10:47:36 INFO MetricsSystemImpl: Stopping azure-file-system metrics system...
17/04/04 10:47:36 INFO MetricsSinkAdapter: azurefs2 thread interrupted.
17/04/04 10:47:36 INFO MetricsSystemImpl: azure-file-system metrics system stopped.
17/04/04 10:47:36 INFO MetricsSystemImpl: azure-file-system metr

Add simple install instructions

It would also be worthwhile to mention a few steps for PySpark: I want to use the package with PySpark but can't seem to get it running. See https://github.com/databricks/spark-csv/issues/59#issuecomment-99291210 for various suggestions, some of which are applicable here too.
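
For reference, a minimal PySpark sketch of what such instructions might look like, assuming the spark-sas7bdat jar and its parso dependency are already available to the session (for example via --packages or --jars); the data source name matches the Scala examples, and the file paths are illustrative:

from pyspark.sql import SparkSession

# Assumes the spark-sas7bdat jar (and its parso dependency) are already on the classpath.
spark = SparkSession.builder.appName("sas7bdat-example").getOrCreate()

# Read a SAS file using the same data source name as the Scala API.
df = (spark.read
      .format("com.github.saurfang.sas.spark")
      .option("forceLowercaseNames", True)
      .load("cars.sas7bdat"))        # illustrative input path

df.write.format("csv").option("header", "true").save("newcars.csv")  # illustrative output path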

Also, it would be good to document a method that does not rely on Maven via --packages, since some people are behind proxies that block Maven access. A small section listing which jar files to download and how to configure pyspark/spark-shell to use those jars would be very helpful!
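
As a rough sketch of the no-Maven approach: download the spark-sas7bdat jar (and the parso jar it depends on) manually, then point Spark at the local files with the standard spark.jars property instead of --packages. The paths and file names below are hypothetical and depend on the version you download:

from pyspark.sql import SparkSession

# Hypothetical local paths to manually downloaded jars.
jars = ",".join([
    "/opt/jars/spark-sas7bdat_2.11-3.0.0.jar",  # illustrative file name/version
    "/opt/jars/parso.jar",                      # illustrative file name/version
])

spark = (SparkSession.builder
         .appName("sas7bdat-no-maven")
         .config("spark.jars", jars)  # comma-separated list of local jars shipped with the job
         .getOrCreate())

df = spark.read.format("com.github.saurfang.sas.spark").load("cars.sas7bdat")

The same jars can also be passed to pyspark or spark-shell on the command line via the --jars flag.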

Willing to migrate license to Apache 2.0?

epam/parso#19

The upstream Parso library is migrating to the Apache license.
Can the same be done with saurfang/spark-sas7bdat?
I'm not sure whether the Apache and GPL licenses are compatible.
Another benefit is that, under an Apache license, saurfang/spark-sas7bdat could potentially be included in core Apache Spark.
