org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1. Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/warehouse-2f263183-c822-4bad-b18e-91640bc3c972' '--conf' 'spark.sql.test.version.index=0' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/warehouse-2f263183-c822-4bad-b18e-91640bc3c972' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/test4079375644124113239.py'
2018-05-03 20:08:25.481 - stderr> SLF4J: Class path contains multiple SLF4J bindings.
2018-05-03 20:08:25.482 - stderr> SLF4J: Found binding in [jar:file:/tmp/test-spark/spark-2.0.2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2018-05-03 20:08:25.482 - stderr> SLF4J: Found binding in [jar:file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2018-05-03 20:08:25.482 - stderr> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2018-05-03 20:08:25.482 - stderr> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-05-03 20:08:25.943 - stdout> 20:08:25.943 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-03 20:08:29.315 - stdout> 20:08:29.315 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-3.2.9.jar."
2018-05-03 20:08:29.346 - stdout> 20:08:29.346 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-core-3.2.10.jar."
2018-05-03 20:08:29.356 - stdout> 20:08:29.356 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-3.2.6.jar."
2018-05-03 20:09:06.517 - stdout> 20:09:06.516 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
2018-05-03 20:09:06.863 - stdout> 20:09:06.863 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
2018-05-03 20:09:13.27 - stdout> 20:09:13.270 WARN org.apache.spark.sql.execution.command.CreateDataSourceTableUtils: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source relation `data_source_tbl_0` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
2018-05-03 20:09:13.459 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@76c53f57
2018-05-03 20:09:13.459 - stderr> at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198)
2018-05-03 20:09:13.459 - stderr> at org.joda.time.DateTime.<init>(DateTime.java:476)
2018-05-03 20:09:13.459 - stderr> at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53)
2018-05-03 20:09:13.459 - stderr> at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
2018-05-03 20:09:13.46 - stderr> at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391)
2018-05-03 20:09:13.46 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
2018-05-03 20:09:13.46 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197)
2018-05-03 20:09:13.46 - stderr> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:280)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:269)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:425)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:188)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:72)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:226)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.execution.command.CreateDataSourceTableUtils$.createDataSourceTable(createDataSourceTables.scala:504)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:259)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:13.46 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:13.461 - stderr> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:13.461 - stderr> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:13.461 - stderr> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:13.461 - stderr> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:13.461 - stderr> at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:13.461 - stderr> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:13.461 - stderr> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:13.461 - stderr> at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:13.461 - stderr> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:13.461 - stderr> at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:13.461 - stderr> at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:13.462 - stderr> at java.lang.Thread.run(Thread.java:745)
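For context (not part of the captured log): the java.io.IOException above comes from Joda-Time resolving its timezone database. ZoneInfoProvider asks the class loader for the resource org/joda/time/tz/data/ZoneInfoMap, and under the isolated Hive client class loader shown in the message that resource is not visible, so DateTimeZone.getDefault() fails while Hive's TimestampParser is being initialized. A minimal Scala sketch of that lookup follows; the object name and println output are illustrative and not taken from the test harness.

    import org.joda.time.DateTimeZone

    object JodaTzCheck {
      def main(args: Array[String]): Unit = {
        // Joda-Time looks this resource up on a class loader; null means the tz data
        // is not visible there, which is what ZoneInfoProvider.openResource reports above.
        val cl = Thread.currentThread().getContextClassLoader
        val res = cl.getResource("org/joda/time/tz/data/ZoneInfoMap")
        println(s"ZoneInfoMap visible to $cl: ${res != null}")
        // Forces the same initialisation path as TimestampParser.<clinit> in the trace.
        println(s"default zone: ${DateTimeZone.getDefault}")
      }
    }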
2018-05-03 20:09:14.351 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.0.2/jars/snappy-java-1.1.2.6.jar (No such file or directory)
2018-05-03 20:09:14.352 - stderr> java.lang.NullPointerException
2018-05-03 20:09:14.353 - stderr> at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232)
2018-05-03 20:09:14.353 - stderr> at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344)
2018-05-03 20:09:14.353 - stderr> at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
2018-05-03 20:09:14.353 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
2018-05-03 20:09:14.353 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.353 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.353 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.353 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.353 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.353 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.353 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.353 - stderr> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.355 - stdout> 20:09:14.354 ERROR org.apache.spark.util.Utils: Aborting task
2018-05-03 20:09:14.355 - stdout> java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.355 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.355 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.355 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.355 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.355 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.355 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.355 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.355 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.355 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.355 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.355 - stdout> ... 13 more
2018-05-03 20:09:14.357 - stdout> 20:09:14.357 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Task attempt attempt_201805032009_0001_m_000000_0 aborted.
2018-05-03 20:09:14.357 - stdout> 20:09:14.357 WARN org.apache.spark.util.Utils: Suppressing exception in catch: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.357 - stdout> java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.357 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.357 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.358 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.358 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.358 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.358 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.358 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.358 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.358 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.358 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.36 - stdout> 20:09:14.359 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1.0 (TID 1)
2018-05-03 20:09:14.36 - stdout> org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.36 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.36 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.36 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.36 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.36 - stdout> ... 8 more
2018-05-03 20:09:14.36 - stdout> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.36 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.36 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.36 - stdout> ... 9 more
2018-05-03 20:09:14.36 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.36 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.361 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.361 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.361 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.361 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.361 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.361 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.361 - stdout> ... 13 more
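For context (not part of the captured log): the java.io.FileNotFoundException earlier in this run shows that snappy-java-1.1.2.6.jar is no longer present in the cached Spark 2.0.2 distribution under /tmp/test-spark, so snappy-java's SnappyLoader cannot extract its bundled native library and the first use of Snappy by the Parquet page writer fails with [FAILED_TO_LOAD_NATIVE_LIBRARY]. A minimal Scala sketch that exercises the same load path follows; the object name and probe payload are illustrative, and the jar path is simply the one taken from the exception above.

    import java.nio.file.{Files, Paths}
    import org.xerial.snappy.Snappy

    object SnappyLoadCheck {
      def main(args: Array[String]): Unit = {
        // Path copied from the FileNotFoundException in the log; adjust for the local layout.
        val jar = Paths.get("/tmp/test-spark/spark-2.0.2/jars/snappy-java-1.1.2.6.jar")
        println(s"snappy-java jar still on disk: ${Files.exists(jar)}")
        // The first call into Snappy triggers Snappy.<clinit> -> SnappyLoader.load(),
        // the same path that throws SnappyError in the stack traces above.
        val out = Snappy.compress("probe".getBytes("UTF-8"))
        println(s"native snappy loaded, compressed ${out.length} bytes")
      }
    }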
2018-05-03 20:09:14.385 - stdout> 20:09:14.385 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.385 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.385 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.385 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.386 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.386 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.386 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.386 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.386 - stdout> ... 8 more
2018-05-03 20:09:14.386 - stdout> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.386 - stdout> ... 9 more
2018-05-03 20:09:14.386 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.386 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.386 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.386 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.386 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.386 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.386 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.386 - stdout> ... 13 more
2018-05-03 20:09:14.386 - stdout>
2018-05-03 20:09:14.388 - stdout> 20:09:14.388 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
2018-05-03 20:09:14.4 - stdout> 20:09:14.398 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand: Aborting job.
2018-05-03 20:09:14.4 - stdout> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.4 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.4 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.4 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.4 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.4 - stdout> ... 8 more
2018-05-03 20:09:14.4 - stdout> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.4 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.4 - stdout> ... 9 more
2018-05-03 20:09:14.4 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.4 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.4 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.4 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.4 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.4 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.401 - stdout> ... 13 more
2018-05-03 20:09:14.401 - stdout>
2018-05-03 20:09:14.401 - stdout> Driver stacktrace:
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
2018-05-03 20:09:14.401 - stdout> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
2018-05-03 20:09:14.401 - stdout> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> at scala.Option.foreach(Option.scala:257)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:249)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.401 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:14.402 - stdout> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:14.402 - stdout> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:14.402 - stdout> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:14.402 - stdout> at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:14.402 - stdout> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:14.402 - stdout> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:14.402 - stdout> at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:14.402 - stdout> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:14.402 - stdout> at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:14.402 - stdout> at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:14.402 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.402 - stdout> Caused by: org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.402 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.402 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.402 - stdout> ... 1 more
2018-05-03 20:09:14.402 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.402 - stdout> ... 8 more
2018-05-03 20:09:14.402 - stdout> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.402 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.402 - stdout> ... 9 more
2018-05-03 20:09:14.402 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.402 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.402 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.402 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.402 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.402 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.403 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.403 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.403 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.403 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.403 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.403 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.403 - stdout> ... 13 more
2018-05-03 20:09:14.403 - stdout> 20:09:14.403 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201805032009_0000 aborted.
2018-05-03 20:09:14.426 - stdout> Traceback (most recent call last):
2018-05-03 20:09:14.427 - stdout> File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/test4079375644124113239.py", line 10, in <module>
2018-05-03 20:09:14.427 - stdout> spark.sql("create table hive_compatible_data_source_tbl_{} using parquet as select 1 i".format(version_index))
2018-05-03 20:09:14.427 - stdout> File "/tmp/test-spark/spark-2.0.2/python/lib/pyspark.zip/pyspark/sql/session.py", line 543, in sql
2018-05-03 20:09:14.427 - stdout> File "/tmp/test-spark/spark-2.0.2/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
2018-05-03 20:09:14.427 - stdout> File "/tmp/test-spark/spark-2.0.2/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
2018-05-03 20:09:14.427 - stdout> File "/tmp/test-spark/spark-2.0.2/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
2018-05-03 20:09:14.429 - stdout> py4j.protocol.Py4JJavaError: An error occurred while calling o28.sql.
2018-05-03 20:09:14.429 - stdout> : org.apache.spark.SparkException: Job aborted.
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:249)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:14.429 - stdout> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:14.429 - stdout> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:14.429 - stdout> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:14.429 - stdout> at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:14.429 - stdout> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:14.429 - stdout> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:14.429 - stdout> at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:14.429 - stdout> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:14.429 - stdout> at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:14.429 - stdout> at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:14.429 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.429 - stdout> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.429 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.429 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.429 - stdout> at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.429 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.429 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.43 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.43 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.43 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.43 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.43 - stdout> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.43 - stdout> ... 8 more
2018-05-03 20:09:14.43 - stdout> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.43 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.43 - stdout> at org.apache.parquet.hadoop.ParquetFileWriter$S

sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1.
Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/warehouse-2f263183-c822-4bad-b18e-91640bc3c972' '--conf' 'spark.sql.test.version.index=0' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/warehouse-2f263183-c822-4bad-b18e-91640bc3c972' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/test4079375644124113239.py'

2018-05-03 20:08:25.481 - stderr> SLF4J: Class path contains multiple SLF4J bindings.
2018-05-03 20:08:25.482 - stderr> SLF4J: Found binding in [jar:file:/tmp/test-spark/spark-2.0.2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2018-05-03 20:08:25.482 - stderr> SLF4J: Found binding in [jar:file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2018-05-03 20:08:25.482 - stderr> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2018-05-03 20:08:25.482 - stderr> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-05-03 20:08:25.943 - stdout> 20:08:25.943 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-03 20:08:29.315 - stdout> 20:08:29.315 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-3.2.9.jar."
2018-05-03 20:08:29.346 - stdout> 20:08:29.346 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-core-3.2.10.jar."
2018-05-03 20:08:29.356 - stdout> 20:08:29.356 WARN DataNucleus.General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/tmp/test-spark/spark-2.0.2/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/home/sparkivy/per-executor-caches/8/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-3.2.6.jar."
2018-05-03 20:09:06.517 - stdout> 20:09:06.516 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
2018-05-03 20:09:06.863 - stdout> 20:09:06.863 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
2018-05-03 20:09:13.27 - stdout> 20:09:13.270 WARN org.apache.spark.sql.execution.command.CreateDataSourceTableUtils: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source relation `data_source_tbl_0` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
2018-05-03 20:09:13.459 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@76c53f57
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198)
2018-05-03 20:09:13.459 - stderr> 	at org.joda.time.DateTime.<init>(DateTime.java:476)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53)
2018-05-03 20:09:13.459 - stderr> 	at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:426)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:280)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:269)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:425)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:188)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:72)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:152)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:226)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableUtils$.createDataSourceTable(createDataSourceTables.scala:504)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:259)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:13.46 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:13.461 - stderr> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:13.461 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:13.461 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:13.461 - stderr> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:13.461 - stderr> 	at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:13.461 - stderr> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:13.461 - stderr> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:13.461 - stderr> 	at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:13.461 - stderr> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:13.461 - stderr> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:13.461 - stderr> 	at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:13.462 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.351 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.0.2/jars/snappy-java-1.1.2.6.jar (No such file or directory)
2018-05-03 20:09:14.352 - stderr> java.lang.NullPointerException
2018-05-03 20:09:14.353 - stderr> 	at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232)
2018-05-03 20:09:14.353 - stderr> 	at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344)
2018-05-03 20:09:14.353 - stderr> 	at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
2018-05-03 20:09:14.353 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
2018-05-03 20:09:14.353 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.353 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.353 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.353 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.353 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.355 - stdout> 20:09:14.354 ERROR org.apache.spark.util.Utils: Aborting task
2018-05-03 20:09:14.355 - stdout> java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.355 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.355 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.355 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.355 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.355 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.355 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.355 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.355 - stdout> 	... 13 more
2018-05-03 20:09:14.357 - stdout> 20:09:14.357 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Task attempt attempt_201805032009_0001_m_000000_0 aborted.
2018-05-03 20:09:14.357 - stdout> 20:09:14.357 WARN org.apache.spark.util.Utils: Suppressing exception in catch: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.357 - stdout> java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.357 - stdout> 	at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.357 - stdout> 	at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.358 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.358 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.358 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.358 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.36 - stdout> 20:09:14.359 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 1.0 (TID 1)
2018-05-03 20:09:14.36 - stdout> org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.36 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.36 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.36 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.36 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.36 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.36 - stdout> 	... 8 more
2018-05-03 20:09:14.36 - stdout> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.36 - stdout> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.36 - stdout> 		... 9 more
2018-05-03 20:09:14.36 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.36 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.361 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.361 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.361 - stdout> 	... 13 more
2018-05-03 20:09:14.385 - stdout> 20:09:14.385 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.385 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.385 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.385 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.386 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.386 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.386 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.386 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.386 - stdout> 	... 8 more
2018-05-03 20:09:14.386 - stdout> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.386 - stdout> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.386 - stdout> 		... 9 more
2018-05-03 20:09:14.386 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.386 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.386 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.386 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.386 - stdout> 	... 13 more
2018-05-03 20:09:14.386 - stdout> 
2018-05-03 20:09:14.388 - stdout> 20:09:14.388 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
2018-05-03 20:09:14.4 - stdout> 20:09:14.398 ERROR org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand: Aborting job.
2018-05-03 20:09:14.4 - stdout> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.4 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.4 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.4 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.4 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.4 - stdout> 	... 8 more
2018-05-03 20:09:14.4 - stdout> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.4 - stdout> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.4 - stdout> 		... 9 more
2018-05-03 20:09:14.4 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.4 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.4 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.4 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.401 - stdout> 	... 13 more
2018-05-03 20:09:14.401 - stdout> 
2018-05-03 20:09:14.401 - stdout> Driver stacktrace:
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
2018-05-03 20:09:14.401 - stdout> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
2018-05-03 20:09:14.401 - stdout> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> 	at scala.Option.foreach(Option.scala:257)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:249)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.401 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:14.402 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:14.402 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:14.402 - stdout> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:14.402 - stdout> 	at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:14.402 - stdout> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:14.402 - stdout> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:14.402 - stdout> 	at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:14.402 - stdout> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:14.402 - stdout> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:14.402 - stdout> 	at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:14.402 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.402 - stdout> Caused by: org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.402 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.402 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.402 - stdout> 	... 1 more
2018-05-03 20:09:14.402 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.402 - stdout> 	... 8 more
2018-05-03 20:09:14.402 - stdout> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:138)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:195)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:153)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
2018-05-03 20:09:14.402 - stdout> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1357)
2018-05-03 20:09:14.402 - stdout> 		... 9 more
2018-05-03 20:09:14.402 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-05-03 20:09:14.402 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-05-03 20:09:14.402 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-05-03 20:09:14.402 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:154)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
2018-05-03 20:09:14.403 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
2018-05-03 20:09:14.403 - stdout> 	... 13 more
2018-05-03 20:09:14.403 - stdout> 20:09:14.403 ERROR org.apache.spark.sql.execution.datasources.DefaultWriterContainer: Job job_201805032009_0000 aborted.
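Note on the chain above: the innermost cause is `org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null`, i.e. snappy-java could not load its native library inside the isolated Hive client classloader, so the Parquet writer failed while compressing the first row group and the CTAS job was aborted. The sketch below is only an illustration of how the failing statement from the generated test script could be rerun with a Parquet codec that does not depend on the snappy native loader; the `gzip` codec and the standalone session are assumptions for the sketch, not the fix applied by the test suite.

```python
# Sketch only: rerun the CTAS from the generated test script with a non-snappy
# Parquet codec. Assumes a local PySpark session; version_index mirrors
# spark.sql.test.version.index=0 from the spark-submit command line.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[2]")
         .config("spark.sql.parquet.compression.codec", "gzip")  # avoid snappy-java's native library
         .getOrCreate())

version_index = 0
spark.sql(
    "create table hive_compatible_data_source_tbl_{} using parquet as select 1 i"
    .format(version_index)
)
```

In environments where this error comes from a non-executable or read-only JVM temp directory, snappy-java's documented `org.xerial.snappy.tempdir` system property (passed via `--driver-java-options`) is another way to let the native library extract and load; whether that applies here depends on the Jenkins worker setup and is not shown in this log. The Python traceback that follows is the same failure as reported back to the driver script through py4j.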
2018-05-03 20:09:14.426 - stdout> Traceback (most recent call last):
2018-05-03 20:09:14.427 - stdout>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6/target/tmp/test4079375644124113239.py", line 10, in <module>
2018-05-03 20:09:14.427 - stdout>     spark.sql("create table hive_compatible_data_source_tbl_{} using parquet as select 1 i".format(version_index))
2018-05-03 20:09:14.427 - stdout>   File "/tmp/test-spark/spark-2.0.2/python/lib/pyspark.zip/pyspark/sql/session.py", line 543, in sql
2018-05-03 20:09:14.427 - stdout>   File "/tmp/test-spark/spark-2.0.2/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
2018-05-03 20:09:14.427 - stdout>   File "/tmp/test-spark/spark-2.0.2/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
2018-05-03 20:09:14.427 - stdout>   File "/tmp/test-spark/spark-2.0.2/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
2018-05-03 20:09:14.429 - stdout> py4j.protocol.Py4JJavaError: An error occurred while calling o28.sql.
2018-05-03 20:09:14.429 - stdout> : org.apache.spark.SparkException: Job aborted.
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:249)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
2018-05-03 20:09:14.429 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-05-03 20:09:14.429 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-05-03 20:09:14.429 - stdout> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-05-03 20:09:14.429 - stdout> 	at java.lang.reflect.Method.invoke(Method.java:497)
2018-05-03 20:09:14.429 - stdout> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
2018-05-03 20:09:14.429 - stdout> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-05-03 20:09:14.429 - stdout> 	at py4j.Gateway.invoke(Gateway.java:280)
2018-05-03 20:09:14.429 - stdout> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-05-03 20:09:14.429 - stdout> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-05-03 20:09:14.429 - stdout> 	at py4j.GatewayConnection.run(GatewayConnection.java:214)
2018-05-03 20:09:14.429 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.429 - stdout> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.apache.spark.SparkException: Task failed while writing rows
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:86)
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
2018-05-03 20:09:14.429 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-05-03 20:09:14.429 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-05-03 20:09:14.429 - stdout> 	at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:09:14.429 - stdout> Caused by: java.lang.RuntimeException: Failed to commit task
2018-05-03 20:09:14.429 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
2018-05-03 20:09:14.43 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
2018-05-03 20:09:14.43 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.43 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
2018-05-03 20:09:14.43 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
2018-05-03 20:09:14.43 - stdout> 	at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
2018-05-03 20:09:14.43 - stdout> 	... 8 more
2018-05-03 20:09:14.43 - stdout> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-05-03 20:09:14.43 - stdout> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:146)
2018-05-03 20:09:14.43 - stdout> 		at org.apache.parquet.ha