org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1. Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303' '--conf' 'spark.sql.test.version.index=0' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py'
2018-10-16 18:56:41.949 - stderr> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
2018-10-16 18:56:41.956 - stderr> 18/10/16 18:56:41 INFO SparkContext: Running Spark version 2.1.3
2018-10-16 18:56:42.224 - stderr> 18/10/16 18:56:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-10-16 18:56:42.347 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing view acls to: jenkins
2018-10-16 18:56:42.347 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing modify acls to: jenkins
2018-10-16 18:56:42.348 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing view acls groups to:
2018-10-16 18:56:42.348 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing modify acls groups to:
2018-10-16 18:56:42.349 - stderr> 18/10/16 18:56:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set()
2018-10-16 18:56:42.659 - stderr> 18/10/16 18:56:42 INFO Utils: Successfully started service 'sparkDriver' on port 43918.
2018-10-16 18:56:42.728 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering MapOutputTracker 2018-10-16 18:56:42.754 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering BlockManagerMaster 2018-10-16 18:56:42.758 - stderr> 18/10/16 18:56:42 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2018-10-16 18:56:42.759 - stderr> 18/10/16 18:56:42 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 2018-10-16 18:56:42.775 - stderr> 18/10/16 18:56:42 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96c64ee9-a4c4-476b-af29-c18e5bc122c0 2018-10-16 18:56:42.799 - stderr> 18/10/16 18:56:42 INFO MemoryStore: MemoryStore started with capacity 366.3 MB 2018-10-16 18:56:42.866 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering OutputCommitCoordinator 2018-10-16 18:56:43.086 - stderr> 18/10/16 18:56:43 INFO SparkContext: Added file file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py at file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py with timestamp 1539741403085 2018-10-16 18:56:43.088 - stderr> 18/10/16 18:56:43 INFO Utils: Copying /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py to /tmp/spark-5f33797a-9d38-4e40-9c2c-b5ecfdd0848c/userFiles-fe3ffd46-aef0-4c46-acce-49e17f01cdc3/test3079620590171554897.py 2018-10-16 18:56:43.149 - stderr> 18/10/16 18:56:43 INFO Executor: Starting executor ID driver on host localhost 2018-10-16 18:56:43.165 - stderr> 18/10/16 18:56:43 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38843. 2018-10-16 18:56:43.166 - stderr> 18/10/16 18:56:43 INFO NettyBlockTransferService: Server created on 192.168.10.26:38843 2018-10-16 18:56:43.167 - stderr> 18/10/16 18:56:43 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 2018-10-16 18:56:43.169 - stderr> 18/10/16 18:56:43 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.26, 38843, None) 2018-10-16 18:56:43.171 - stderr> 18/10/16 18:56:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.26:38843 with 366.3 MB RAM, BlockManagerId(driver, 192.168.10.26, 38843, None) 2018-10-16 18:56:43.174 - stderr> 18/10/16 18:56:43 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.26, 38843, None) 2018-10-16 18:56:43.175 - stderr> 18/10/16 18:56:43 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.26, 38843, None) 2018-10-16 18:56:43.445 - stderr> 18/10/16 18:56:43 INFO SharedState: Warehouse path is '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303'. 2018-10-16 18:56:43.54 - stderr> 18/10/16 18:56:43 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. 
2018-10-16 18:56:44.123 - stderr> 18/10/16 18:56:44 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2018-10-16 18:56:44.144 - stderr> 18/10/16 18:56:44 INFO ObjectStore: ObjectStore, initialize called 2018-10-16 18:56:44.265 - stderr> 18/10/16 18:56:44 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 2018-10-16 18:56:44.266 - stderr> 18/10/16 18:56:44 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored 2018-10-16 18:56:56.968 - stderr> 18/10/16 18:56:56 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2018-10-16 18:56:58.225 - stderr> 18/10/16 18:56:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-10-16 18:56:58.226 - stderr> 18/10/16 18:56:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-10-16 18:57:03.011 - stderr> 18/10/16 18:57:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-10-16 18:57:03.012 - stderr> 18/10/16 18:57:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-10-16 18:57:04.405 - stderr> 18/10/16 18:57:04 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY 2018-10-16 18:57:04.407 - stderr> 18/10/16 18:57:04 INFO ObjectStore: Initialized ObjectStore 2018-10-16 18:57:04.669 - stderr> 18/10/16 18:57:04 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 2018-10-16 18:57:05.033 - stderr> 18/10/16 18:57:05 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException 2018-10-16 18:57:05.405 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: Added admin role in metastore 2018-10-16 18:57:05.413 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: Added public role in metastore 2018-10-16 18:57:05.78 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: No user is added in admin role, since config is empty 2018-10-16 18:57:05.877 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: 0: get_all_databases 2018-10-16 18:57:05.878 - stderr> 18/10/16 18:57:05 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_all_databases 2018-10-16 18:57:05.893 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: 0: get_functions: db=default pat=* 2018-10-16 18:57:05.893 - stderr> 18/10/16 18:57:05 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_functions: db=default pat=* 2018-10-16 18:57:05.894 - stderr> 18/10/16 18:57:05 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 
2018-10-16 18:57:07.007 - stderr> 18/10/16 18:57:07 INFO SessionState: Created local directory: /tmp/593afbbf-9352-45fc-8734-37eae36b8893_resources 2018-10-16 18:57:07.009 - stderr> 18/10/16 18:57:07 INFO SessionState: Created HDFS directory: /tmp/hive/jenkins/593afbbf-9352-45fc-8734-37eae36b8893 2018-10-16 18:57:07.012 - stderr> 18/10/16 18:57:07 INFO SessionState: Created local directory: /tmp/jenkins/593afbbf-9352-45fc-8734-37eae36b8893 2018-10-16 18:57:07.016 - stderr> 18/10/16 18:57:07 INFO SessionState: Created HDFS directory: /tmp/hive/jenkins/593afbbf-9352-45fc-8734-37eae36b8893/_tmp_space.db 2018-10-16 18:57:07.018 - stderr> 18/10/16 18:57:07 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303 2018-10-16 18:57:07.026 - stderr> 18/10/16 18:57:07 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:07.027 - stderr> 18/10/16 18:57:07 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:07.048 - stderr> 18/10/16 18:57:07 INFO HiveMetaStore: 0: get_database: global_temp 2018-10-16 18:57:07.048 - stderr> 18/10/16 18:57:07 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: global_temp 2018-10-16 18:57:07.049 - stderr> 18/10/16 18:57:07 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException 2018-10-16 18:57:07.117 - stderr> 18/10/16 18:57:07 INFO SparkSqlParser: Parsing command: create table data_source_tbl_0 using json as select 1 i 2018-10-16 18:57:09.008 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_table : db=default tbl=data_source_tbl_0 2018-10-16 18:57:09.008 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=data_source_tbl_0 2018-10-16 18:57:09.063 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:09.063 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:09.066 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:09.066 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id 2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id 2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 2018-10-16 18:57:09.226 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap 2018-10-16 18:57:09.226 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.partition is deprecated. 
Instead, use mapreduce.task.partition 2018-10-16 18:57:09.228 - stderr> 18/10/16 18:57:09 INFO FileOutputCommitter: File Output Committer Algorithm version is 1 2018-10-16 18:57:09.229 - stderr> 18/10/16 18:57:09 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2018-10-16 18:57:09.493 - stderr> 18/10/16 18:57:09 INFO CodeGenerator: Code generated in 226.972236 ms 2018-10-16 18:57:09.637 - stderr> 18/10/16 18:57:09 INFO SparkContext: Starting job: sql at NativeMethodAccessorImpl.java:0 2018-10-16 18:57:09.656 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Got job 0 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions 2018-10-16 18:57:09.656 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Final stage: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) 2018-10-16 18:57:09.657 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Parents of final stage: List() 2018-10-16 18:57:09.658 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Missing parents: List() 2018-10-16 18:57:09.663 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents 2018-10-16 18:57:09.775 - stderr> 18/10/16 18:57:09 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 80.7 KB, free 366.2 MB) 2018-10-16 18:57:09.802 - stderr> 18/10/16 18:57:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 31.1 KB, free 366.2 MB) 2018-10-16 18:57:09.804 - stderr> 18/10/16 18:57:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.26:38843 (size: 31.1 KB, free: 366.3 MB) 2018-10-16 18:57:09.806 - stderr> 18/10/16 18:57:09 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1005 2018-10-16 18:57:09.81 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0) 2018-10-16 18:57:09.811 - stderr> 18/10/16 18:57:09 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks 2018-10-16 18:57:09.858 - stderr> 18/10/16 18:57:09 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 6314 bytes) 2018-10-16 18:57:09.87 - stderr> 18/10/16 18:57:09 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 2018-10-16 18:57:09.902 - stderr> 18/10/16 18:57:09 INFO Executor: Fetching file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py with timestamp 1539741403085 2018-10-16 18:57:09.933 - stderr> 18/10/16 18:57:09 INFO Utils: /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py has been previously copied to /tmp/spark-5f33797a-9d38-4e40-9c2c-b5ecfdd0848c/userFiles-fe3ffd46-aef0-4c46-acce-49e17f01cdc3/test3079620590171554897.py 2018-10-16 18:57:10.014 - stderr> 18/10/16 18:57:10 INFO CodeGenerator: Code generated in 8.987251 ms 2018-10-16 18:57:10.018 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1 2018-10-16 18:57:10.019 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2018-10-16 18:57:10.067 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: Saved output of task 'attempt_20181016185710_0000_m_000000_0' to 
file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0/_temporary/0/task_20181016185710_0000_m_000000 2018-10-16 18:57:10.068 - stderr> 18/10/16 18:57:10 INFO SparkHadoopMapRedUtil: attempt_20181016185710_0000_m_000000_0: Committed 2018-10-16 18:57:10.082 - stderr> 18/10/16 18:57:10 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1626 bytes result sent to driver 2018-10-16 18:57:10.095 - stderr> 18/10/16 18:57:10 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 257 ms on localhost (executor driver) (1/1) 2018-10-16 18:57:10.097 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 2018-10-16 18:57:10.099 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) finished in 0.278 s 2018-10-16 18:57:10.105 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Job 0 finished: sql at NativeMethodAccessorImpl.java:0, took 0.467732 s 2018-10-16 18:57:10.131 - stderr> 18/10/16 18:57:10 INFO FileFormatWriter: Job null committed. 2018-10-16 18:57:10.171 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:10.171 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:10.174 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:10.174 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:10.176 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_table : db=default tbl=data_source_tbl_0 2018-10-16 18:57:10.176 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=data_source_tbl_0 2018-10-16 18:57:10.204 - stderr> 18/10/16 18:57:10 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`data_source_tbl_0` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. 
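For reference, the CTAS statement parsed at 18:57:07.117 above can be restated as a minimal PySpark sketch. The session setup here is assumed for illustration only; it is not taken from the actual test script (test3079620590171554897.py), which is driven through the spark-submit command shown at the top of this log.

```python
from pyspark.sql import SparkSession

# Assumed minimal, Hive-enabled session; the real test passes its configuration
# via the --conf flags in the spark-submit command line above.
spark = (SparkSession.builder
         .appName("prepare testing tables")
         .enableHiveSupport()
         .getOrCreate())

# The statement logged by SparkSqlParser: a CTAS creating a JSON-backed data
# source table, which HiveExternalCatalog then persists in the Spark SQL
# specific (non-Hive-compatible) format warned about above.
spark.sql("create table data_source_tbl_0 using json as select 1 i")
```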
2018-10-16 18:57:10.331 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@1869bb21 2018-10-16 18:57:10.331 - stderr> at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210) 2018-10-16 18:57:10.331 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127) 2018-10-16 18:57:10.331 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86) 2018-10-16 18:57:10.331 - stderr> at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514) 2018-10-16 18:57:10.331 - stderr> at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413) 2018-10-16 18:57:10.331 - stderr> at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216) 2018-10-16 18:57:10.332 - stderr> at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151) 2018-10-16 18:57:10.332 - stderr> at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79) 2018-10-16 18:57:10.332 - stderr> at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198) 2018-10-16 18:57:10.332 - stderr> at org.joda.time.DateTime.<init>(DateTime.java:476) 2018-10-16 18:57:10.332 - stderr> at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197) 2018-10-16 18:57:10.332 - stderr> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698) 2018-10-16 18:57:10.332 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:425) 2018-10-16 18:57:10.332 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:425) 2018-10-16 18:57:10.332 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:425) 2018-10-16 18:57:10.332 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:279) 2018-10-16 18:57:10.333 - stderr> at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:226) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:225) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:268) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:424) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:455) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$createDataSourceTable(HiveExternalCatalog.scala:364) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:239) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:199) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:199) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:199) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:248) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:276) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64) 2018-10-16 18:57:10.333 - stderr> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:600) 2018-10-16 18:57:10.333 - stderr> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2018-10-16 18:57:10.333 - stderr> at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2018-10-16 18:57:10.333 - stderr> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2018-10-16 18:57:10.334 - stderr> at java.lang.reflect.Method.invoke(Method.java:497) 2018-10-16 18:57:10.334 - stderr> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 2018-10-16 18:57:10.334 - stderr> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 2018-10-16 18:57:10.334 - stderr> at py4j.Gateway.invoke(Gateway.java:282) 2018-10-16 18:57:10.334 - stderr> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 2018-10-16 18:57:10.334 - stderr> at py4j.commands.CallCommand.execute(CallCommand.java:79) 2018-10-16 18:57:10.334 - stderr> at py4j.GatewayConnection.run(GatewayConnection.java:238) 2018-10-16 18:57:10.334 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.37 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: create_table: Table(tableName:data_source_tbl_0, dbName:default, owner:jenkins, createTime:1539741427, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null)) 2018-10-16 18:57:10.37 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=create_table: Table(tableName:data_source_tbl_0, dbName:default, owner:jenkins, createTime:1539741427, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null)) 
2018-10-16 18:57:10.378 - stderr> 18/10/16 18:57:10 INFO log: Updating table stats fast for data_source_tbl_0 2018-10-16 18:57:10.378 - stderr> 18/10/16 18:57:10 INFO log: Updated size of table data_source_tbl_0 to 8 2018-10-16 18:57:10.605 - stderr> 18/10/16 18:57:10 INFO SparkSqlParser: Parsing command: create table hive_compatible_data_source_tbl_0 using parquet as select 1 i 2018-10-16 18:57:10.61 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_table : db=default tbl=hive_compatible_data_source_tbl_0 2018-10-16 18:57:10.61 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=hive_compatible_data_source_tbl_0 2018-10-16 18:57:10.617 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:10.617 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:10.619 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default 2018-10-16 18:57:10.619 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-10-16 18:57:10.64 - stderr> 18/10/16 18:57:10 INFO ParquetFileFormat: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter 2018-10-16 18:57:10.667 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1 2018-10-16 18:57:10.668 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-10-16 18:57:10.668 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1 2018-10-16 18:57:10.669 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-10-16 18:57:10.717 - stderr> 18/10/16 18:57:10 INFO SparkContext: Starting job: sql at NativeMethodAccessorImpl.java:0 2018-10-16 18:57:10.718 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Got job 1 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions 2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Final stage: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) 2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Parents of final stage: List() 2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Missing parents: List() 2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[7] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents 2018-10-16 18:57:10.755 - stderr> 18/10/16 18:57:10 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 78.8 KB, free 366.1 MB) 2018-10-16 18:57:10.757 - stderr> 18/10/16 18:57:10 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 29.3 KB, free 366.1 MB) 2018-10-16 18:57:10.757 - stderr> 18/10/16 18:57:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.26:38843 (size: 29.3 KB, free: 366.2 MB) 2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1005 2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[7] at sql at NativeMethodAccessorImpl.java:0) 2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Adding 
task set 1.0 with 1 tasks
2018-10-16 18:57:10.761 - stderr> 18/10/16 18:57:10 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 6315 bytes)
2018-10-16 18:57:10.762 - stderr> 18/10/16 18:57:10 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
2018-10-16 18:57:10.776 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.779 - stderr> 18/10/16 18:57:10 INFO CodecConfig: Compression: SNAPPY
2018-10-16 18:57:10.779 - stderr> 18/10/16 18:57:10 INFO CodecConfig: Compression: SNAPPY
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet block size to 134217728
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet page size to 1048576
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Dictionary is on
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Validation is off
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Maximum row group padding size is 0 bytes
2018-10-16 18:57:10.797 - stderr> 18/10/16 18:57:10 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
2018-10-16 18:57:10.797 - stderr> {
2018-10-16 18:57:10.797 - stderr>   "type" : "struct",
2018-10-16 18:57:10.797 - stderr>   "fields" : [ {
2018-10-16 18:57:10.797 - stderr>     "name" : "i",
2018-10-16 18:57:10.797 - stderr>     "type" : "integer",
2018-10-16 18:57:10.797 - stderr>     "nullable" : false,
2018-10-16 18:57:10.797 - stderr>     "metadata" : { }
2018-10-16 18:57:10.797 - stderr>   } ]
2018-10-16 18:57:10.797 - stderr> }
2018-10-16 18:57:10.797 - stderr> and corresponding Parquet message type:
2018-10-16 18:57:10.797 - stderr> message spark_schema {
2018-10-16 18:57:10.797 - stderr>   required int32 i;
2018-10-16 18:57:10.797 - stderr> }
2018-10-16 18:57:10.797 - stderr>
2018-10-16 18:57:10.797 - stderr>
2018-10-16 18:57:10.829 - stderr> 18/10/16 18:57:10 INFO CodecPool: Got brand-new compressor [.snappy]
2018-10-16 18:57:10.871 - stderr> 18/10/16 18:57:10 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 8
2018-10-16 18:57:10.923 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.1.3/jars/snappy-java-1.1.2.6.jar (No such file or directory)
2018-10-16 18:57:10.924 - stderr> java.lang.NullPointerException
2018-10-16 18:57:10.924 - stderr> at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232)
2018-10-16 18:57:10.924 - stderr> at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344)
2018-10-16 18:57:10.924 - stderr> at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
2018-10-16 18:57:10.924 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
2018-10-16 18:57:10.924 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.924 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.924 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.924 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.924 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.924 - stderr>
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-10-16 18:57:10.924 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:100) 2018-10-16 18:57:10.924 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) 2018-10-16 18:57:10.924 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 2018-10-16 18:57:10.924 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 2018-10-16 18:57:10.924 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.928 - stderr> 18/10/16 18:57:10 ERROR Utils: Aborting task 2018-10-16 18:57:10.928 - stderr> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-10-16 18:57:10.928 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-10-16 18:57:10.928 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-10-16 18:57:10.928 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-10-16 18:57:10.928 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113) 2018-10-16 18:57:10.928 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112) 2018-10-16 18:57:10.928 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44) 2018-10-16 18:57:10.928 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252) 2018-10-16 18:57:10.928 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191) 2018-10-16 18:57:10.928 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188) 2018-10-16 18:57:10.928 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356) 2018-10-16 18:57:10.929 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193) 2018-10-16 18:57:10.929 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) 
2018-10-16 18:57:10.929 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) 2018-10-16 18:57:10.929 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-10-16 18:57:10.929 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:100) 2018-10-16 18:57:10.929 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) 2018-10-16 18:57:10.929 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 2018-10-16 18:57:10.929 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 2018-10-16 18:57:10.929 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.929 - stderr> 18/10/16 18:57:10 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 1,024 2018-10-16 18:57:10.931 - stderr> 18/10/16 18:57:10 ERROR FileFormatWriter: Job job_20181016185710_0001 aborted. 2018-10-16 18:57:10.932 - stderr> 18/10/16 18:57:10 WARN Utils: Suppressing exception in catch: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK 2018-10-16 18:57:10.932 - stderr> java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165) 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157) 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263) 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163) 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113) 2018-10-16 18:57:10.932 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-10-16 18:57:10.932 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:100) 2018-10-16 
18:57:10.932 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) 2018-10-16 18:57:10.932 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 2018-10-16 18:57:10.932 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 2018-10-16 18:57:10.932 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.934 - stderr> 18/10/16 18:57:10 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1) 2018-10-16 18:57:10.934 - stderr> org.apache.spark.SparkException: Task failed while writing rows 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:100) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) 2018-10-16 18:57:10.934 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 2018-10-16 18:57:10.934 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 2018-10-16 18:57:10.934 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.934 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-10-16 18:57:10.934 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-10-16 18:57:10.934 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-10-16 18:57:10.934 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-10-16 18:57:10.934 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112) 2018-10-16 18:57:10.934 - stderr> at 
org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193) 2018-10-16 18:57:10.934 - stderr> ... 8 more 2018-10-16 18:57:10.934 - stderr> Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113) 2018-10-16 18:57:10.934 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196) 2018-10-16 18:57:10.934 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365) 2018-10-16 18:57:10.934 - stderr> ... 
9 more 2018-10-16 18:57:10.957 - stderr> 18/10/16 18:57:10 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:100) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) 2018-10-16 18:57:10.957 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 2018-10-16 18:57:10.957 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 2018-10-16 18:57:10.957 - stderr> at java.lang.Thread.run(Thread.java:745) 2018-10-16 18:57:10.957 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-10-16 18:57:10.957 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-10-16 18:57:10.957 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-10-16 18:57:10.957 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-10-16 18:57:10.957 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113) 2018-10-16 18:57:10.957 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252) 2018-10-16 18:57:10.957 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191) 
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.957 - stderr> 	... 8 more
2018-10-16 18:57:10.957 - stderr> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.957 - stderr> 		... 9 more
2018-10-16 18:57:10.957 - stderr> 
2018-10-16 18:57:10.958 - stderr> 18/10/16 18:57:10 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
2018-10-16 18:57:10.959 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
2018-10-16 18:57:10.965 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Cancelling stage 1
2018-10-16 18:57:10.966 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) failed in 0.207 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.966 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.966 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.966 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.966 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.966 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.966 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.966 - stderr> 	... 8 more
2018-10-16 18:57:10.966 - stderr> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.966 - stderr> 		... 9 more
2018-10-16 18:57:10.966 - stderr> 
2018-10-16 18:57:10.966 - stderr> Driver stacktrace:
2018-10-16 18:57:10.967 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Job 1 failed: sql at NativeMethodAccessorImpl.java:0, took 0.249831 s
2018-10-16 18:57:10.969 - stderr> 18/10/16 18:57:10 ERROR FileFormatWriter: Aborting job null.
2018-10-16 18:57:10.969 - stderr> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.969 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.969 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.969 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.969 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.969 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.969 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnCh

sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1.
Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303' '--conf' 'spark.sql.test.version.index=0' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py'
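
The failing step is "prepare testing tables", which spark-submits a generated PySpark script (test3079620590171554897.py) against a downloaded Spark 2.1.3 distribution. The generated script itself is not captured in this report; a minimal, hypothetical sketch of what it plausibly does, reconstructed only from the SQL statements that appear in the stderr below (the `_0` suffix matching spark.sql.test.version.index=0), is:

    # Hypothetical reconstruction for illustration only -- not the actual generated test script.
    from pyspark.sql import SparkSession

    version_index = 0  # mirrors --conf spark.sql.test.version.index=0

    spark = (SparkSession.builder
             .appName("prepare testing tables")
             .enableHiveSupport()   # tables are registered in the Derby-backed Hive metastore
             .getOrCreate())

    # The two statements visible in the log below:
    spark.sql("create table data_source_tbl_{} using json as select 1 i".format(version_index))
    spark.sql("create table hive_compatible_data_source_tbl_{} using parquet as select 1 i".format(version_index))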

2018-10-16 18:56:41.949 - stderr> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
2018-10-16 18:56:41.956 - stderr> 18/10/16 18:56:41 INFO SparkContext: Running Spark version 2.1.3
2018-10-16 18:56:42.224 - stderr> 18/10/16 18:56:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-10-16 18:56:42.347 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing view acls to: jenkins
2018-10-16 18:56:42.347 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing modify acls to: jenkins
2018-10-16 18:56:42.348 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing view acls groups to: 
2018-10-16 18:56:42.348 - stderr> 18/10/16 18:56:42 INFO SecurityManager: Changing modify acls groups to: 
2018-10-16 18:56:42.349 - stderr> 18/10/16 18:56:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
2018-10-16 18:56:42.659 - stderr> 18/10/16 18:56:42 INFO Utils: Successfully started service 'sparkDriver' on port 43918.
2018-10-16 18:56:42.728 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering MapOutputTracker
2018-10-16 18:56:42.754 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering BlockManagerMaster
2018-10-16 18:56:42.758 - stderr> 18/10/16 18:56:42 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-10-16 18:56:42.759 - stderr> 18/10/16 18:56:42 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
2018-10-16 18:56:42.775 - stderr> 18/10/16 18:56:42 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96c64ee9-a4c4-476b-af29-c18e5bc122c0
2018-10-16 18:56:42.799 - stderr> 18/10/16 18:56:42 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
2018-10-16 18:56:42.866 - stderr> 18/10/16 18:56:42 INFO SparkEnv: Registering OutputCommitCoordinator
2018-10-16 18:56:43.086 - stderr> 18/10/16 18:56:43 INFO SparkContext: Added file file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py at file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py with timestamp 1539741403085
2018-10-16 18:56:43.088 - stderr> 18/10/16 18:56:43 INFO Utils: Copying /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py to /tmp/spark-5f33797a-9d38-4e40-9c2c-b5ecfdd0848c/userFiles-fe3ffd46-aef0-4c46-acce-49e17f01cdc3/test3079620590171554897.py
2018-10-16 18:56:43.149 - stderr> 18/10/16 18:56:43 INFO Executor: Starting executor ID driver on host localhost
2018-10-16 18:56:43.165 - stderr> 18/10/16 18:56:43 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38843.
2018-10-16 18:56:43.166 - stderr> 18/10/16 18:56:43 INFO NettyBlockTransferService: Server created on 192.168.10.26:38843
2018-10-16 18:56:43.167 - stderr> 18/10/16 18:56:43 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2018-10-16 18:56:43.169 - stderr> 18/10/16 18:56:43 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.10.26, 38843, None)
2018-10-16 18:56:43.171 - stderr> 18/10/16 18:56:43 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.10.26:38843 with 366.3 MB RAM, BlockManagerId(driver, 192.168.10.26, 38843, None)
2018-10-16 18:56:43.174 - stderr> 18/10/16 18:56:43 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.10.26, 38843, None)
2018-10-16 18:56:43.175 - stderr> 18/10/16 18:56:43 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.10.26, 38843, None)
2018-10-16 18:56:43.445 - stderr> 18/10/16 18:56:43 INFO SharedState: Warehouse path is '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303'.
2018-10-16 18:56:43.54 - stderr> 18/10/16 18:56:43 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
2018-10-16 18:56:44.123 - stderr> 18/10/16 18:56:44 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2018-10-16 18:56:44.144 - stderr> 18/10/16 18:56:44 INFO ObjectStore: ObjectStore, initialize called
2018-10-16 18:56:44.265 - stderr> 18/10/16 18:56:44 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2018-10-16 18:56:44.266 - stderr> 18/10/16 18:56:44 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
2018-10-16 18:56:56.968 - stderr> 18/10/16 18:56:56 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2018-10-16 18:56:58.225 - stderr> 18/10/16 18:56:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2018-10-16 18:56:58.226 - stderr> 18/10/16 18:56:58 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2018-10-16 18:57:03.011 - stderr> 18/10/16 18:57:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2018-10-16 18:57:03.012 - stderr> 18/10/16 18:57:03 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2018-10-16 18:57:04.405 - stderr> 18/10/16 18:57:04 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
2018-10-16 18:57:04.407 - stderr> 18/10/16 18:57:04 INFO ObjectStore: Initialized ObjectStore
2018-10-16 18:57:04.669 - stderr> 18/10/16 18:57:04 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
2018-10-16 18:57:05.033 - stderr> 18/10/16 18:57:05 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
2018-10-16 18:57:05.405 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: Added admin role in metastore
2018-10-16 18:57:05.413 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: Added public role in metastore
2018-10-16 18:57:05.78 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: No user is added in admin role, since config is empty
2018-10-16 18:57:05.877 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: 0: get_all_databases
2018-10-16 18:57:05.878 - stderr> 18/10/16 18:57:05 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_all_databases	
2018-10-16 18:57:05.893 - stderr> 18/10/16 18:57:05 INFO HiveMetaStore: 0: get_functions: db=default pat=*
2018-10-16 18:57:05.893 - stderr> 18/10/16 18:57:05 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_functions: db=default pat=*	
2018-10-16 18:57:05.894 - stderr> 18/10/16 18:57:05 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
2018-10-16 18:57:07.007 - stderr> 18/10/16 18:57:07 INFO SessionState: Created local directory: /tmp/593afbbf-9352-45fc-8734-37eae36b8893_resources
2018-10-16 18:57:07.009 - stderr> 18/10/16 18:57:07 INFO SessionState: Created HDFS directory: /tmp/hive/jenkins/593afbbf-9352-45fc-8734-37eae36b8893
2018-10-16 18:57:07.012 - stderr> 18/10/16 18:57:07 INFO SessionState: Created local directory: /tmp/jenkins/593afbbf-9352-45fc-8734-37eae36b8893
2018-10-16 18:57:07.016 - stderr> 18/10/16 18:57:07 INFO SessionState: Created HDFS directory: /tmp/hive/jenkins/593afbbf-9352-45fc-8734-37eae36b8893/_tmp_space.db
2018-10-16 18:57:07.018 - stderr> 18/10/16 18:57:07 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303
2018-10-16 18:57:07.026 - stderr> 18/10/16 18:57:07 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:07.027 - stderr> 18/10/16 18:57:07 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:07.048 - stderr> 18/10/16 18:57:07 INFO HiveMetaStore: 0: get_database: global_temp
2018-10-16 18:57:07.048 - stderr> 18/10/16 18:57:07 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: global_temp	
2018-10-16 18:57:07.049 - stderr> 18/10/16 18:57:07 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
2018-10-16 18:57:07.117 - stderr> 18/10/16 18:57:07 INFO SparkSqlParser: Parsing command: create table data_source_tbl_0 using json as select 1 i
2018-10-16 18:57:09.008 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_table : db=default tbl=data_source_tbl_0
2018-10-16 18:57:09.008 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=data_source_tbl_0	
2018-10-16 18:57:09.063 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:09.063 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:09.066 - stderr> 18/10/16 18:57:09 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:09.066 - stderr> 18/10/16 18:57:09 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
2018-10-16 18:57:09.225 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2018-10-16 18:57:09.226 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
2018-10-16 18:57:09.226 - stderr> 18/10/16 18:57:09 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
2018-10-16 18:57:09.228 - stderr> 18/10/16 18:57:09 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:09.229 - stderr> 18/10/16 18:57:09 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-10-16 18:57:09.493 - stderr> 18/10/16 18:57:09 INFO CodeGenerator: Code generated in 226.972236 ms
2018-10-16 18:57:09.637 - stderr> 18/10/16 18:57:09 INFO SparkContext: Starting job: sql at NativeMethodAccessorImpl.java:0
2018-10-16 18:57:09.656 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Got job 0 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions
2018-10-16 18:57:09.656 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Final stage: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0)
2018-10-16 18:57:09.657 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Parents of final stage: List()
2018-10-16 18:57:09.658 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Missing parents: List()
2018-10-16 18:57:09.663 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents
2018-10-16 18:57:09.775 - stderr> 18/10/16 18:57:09 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 80.7 KB, free 366.2 MB)
2018-10-16 18:57:09.802 - stderr> 18/10/16 18:57:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 31.1 KB, free 366.2 MB)
2018-10-16 18:57:09.804 - stderr> 18/10/16 18:57:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.10.26:38843 (size: 31.1 KB, free: 366.3 MB)
2018-10-16 18:57:09.806 - stderr> 18/10/16 18:57:09 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1005
2018-10-16 18:57:09.81 - stderr> 18/10/16 18:57:09 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0)
2018-10-16 18:57:09.811 - stderr> 18/10/16 18:57:09 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
2018-10-16 18:57:09.858 - stderr> 18/10/16 18:57:09 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 6314 bytes)
2018-10-16 18:57:09.87 - stderr> 18/10/16 18:57:09 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
2018-10-16 18:57:09.902 - stderr> 18/10/16 18:57:09 INFO Executor: Fetching file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py with timestamp 1539741403085
2018-10-16 18:57:09.933 - stderr> 18/10/16 18:57:09 INFO Utils: /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/test3079620590171554897.py has been previously copied to /tmp/spark-5f33797a-9d38-4e40-9c2c-b5ecfdd0848c/userFiles-fe3ffd46-aef0-4c46-acce-49e17f01cdc3/test3079620590171554897.py
2018-10-16 18:57:10.014 - stderr> 18/10/16 18:57:10 INFO CodeGenerator: Code generated in 8.987251 ms
2018-10-16 18:57:10.018 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.019 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-10-16 18:57:10.067 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: Saved output of task 'attempt_20181016185710_0000_m_000000_0' to file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0/_temporary/0/task_20181016185710_0000_m_000000
2018-10-16 18:57:10.068 - stderr> 18/10/16 18:57:10 INFO SparkHadoopMapRedUtil: attempt_20181016185710_0000_m_000000_0: Committed
2018-10-16 18:57:10.082 - stderr> 18/10/16 18:57:10 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1626 bytes result sent to driver
2018-10-16 18:57:10.095 - stderr> 18/10/16 18:57:10 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 257 ms on localhost (executor driver) (1/1)
2018-10-16 18:57:10.097 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
2018-10-16 18:57:10.099 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) finished in 0.278 s
2018-10-16 18:57:10.105 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Job 0 finished: sql at NativeMethodAccessorImpl.java:0, took 0.467732 s
2018-10-16 18:57:10.131 - stderr> 18/10/16 18:57:10 INFO FileFormatWriter: Job null committed.
2018-10-16 18:57:10.171 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:10.171 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:10.174 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:10.174 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:10.176 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_table : db=default tbl=data_source_tbl_0
2018-10-16 18:57:10.176 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=data_source_tbl_0	
2018-10-16 18:57:10.204 - stderr> 18/10/16 18:57:10 WARN HiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`data_source_tbl_0` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
2018-10-16 18:57:10.331 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@1869bb21
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
2018-10-16 18:57:10.331 - stderr> 	at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
2018-10-16 18:57:10.332 - stderr> 	at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
2018-10-16 18:57:10.332 - stderr> 	at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
2018-10-16 18:57:10.332 - stderr> 	at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198)
2018-10-16 18:57:10.332 - stderr> 	at org.joda.time.DateTime.<init>(DateTime.java:476)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:425)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:425)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:425)
2018-10-16 18:57:10.332 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:279)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:226)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:225)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:268)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:424)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:455)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$createDataSourceTable(HiveExternalCatalog.scala:364)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:239)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:199)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:199)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:199)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:248)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:276)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
2018-10-16 18:57:10.333 - stderr> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:600)
2018-10-16 18:57:10.333 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-10-16 18:57:10.333 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-10-16 18:57:10.333 - stderr> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-10-16 18:57:10.334 - stderr> 	at java.lang.reflect.Method.invoke(Method.java:497)
2018-10-16 18:57:10.334 - stderr> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
2018-10-16 18:57:10.334 - stderr> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-10-16 18:57:10.334 - stderr> 	at py4j.Gateway.invoke(Gateway.java:282)
2018-10-16 18:57:10.334 - stderr> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-10-16 18:57:10.334 - stderr> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-10-16 18:57:10.334 - stderr> 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
2018-10-16 18:57:10.334 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.37 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: create_table: Table(tableName:data_source_tbl_0, dbName:default, owner:jenkins, createTime:1539741427, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
2018-10-16 18:57:10.37 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=create_table: Table(tableName:data_source_tbl_0, dbName:default, owner:jenkins, createTime:1539741427, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.7/target/tmp/warehouse-d005436d-6646-41bb-a2fa-b03ac2d79303/data_source_tbl_0, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))	
2018-10-16 18:57:10.378 - stderr> 18/10/16 18:57:10 INFO log: Updating table stats fast for data_source_tbl_0
2018-10-16 18:57:10.378 - stderr> 18/10/16 18:57:10 INFO log: Updated size of table data_source_tbl_0 to 8
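
As the WARN at 18:57:10.204 explains, a `using json` table has no corresponding Hive SerDe, so the create_table call above records only a placeholder Hive schema (col array<string>) plus Spark-specific table properties (spark.sql.sources.provider=json, spark.sql.sources.schema.part.0). A small hedged check, assuming the same session as in the sketch above, that the real schema is recovered from those properties rather than from the placeholder columns:

    # Hedged illustration: Spark rebuilds the real schema from the spark.sql.sources.* properties,
    # not from the placeholder Hive columns shown in the create_table log line above.
    spark.table("data_source_tbl_0").printSchema()
    # expected, per the recorded schema: a single column "i: integer (nullable = true)"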
2018-10-16 18:57:10.605 - stderr> 18/10/16 18:57:10 INFO SparkSqlParser: Parsing command: create table hive_compatible_data_source_tbl_0 using parquet as select 1 i
2018-10-16 18:57:10.61 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_table : db=default tbl=hive_compatible_data_source_tbl_0
2018-10-16 18:57:10.61 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=hive_compatible_data_source_tbl_0	
2018-10-16 18:57:10.617 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:10.617 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:10.619 - stderr> 18/10/16 18:57:10 INFO HiveMetaStore: 0: get_database: default
2018-10-16 18:57:10.619 - stderr> 18/10/16 18:57:10 INFO audit: ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-10-16 18:57:10.64 - stderr> 18/10/16 18:57:10 INFO ParquetFileFormat: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.667 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.668 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.668 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.669 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.717 - stderr> 18/10/16 18:57:10 INFO SparkContext: Starting job: sql at NativeMethodAccessorImpl.java:0
2018-10-16 18:57:10.718 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Got job 1 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions
2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Final stage: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0)
2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Parents of final stage: List()
2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Missing parents: List()
2018-10-16 18:57:10.719 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[7] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents
2018-10-16 18:57:10.755 - stderr> 18/10/16 18:57:10 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 78.8 KB, free 366.1 MB)
2018-10-16 18:57:10.757 - stderr> 18/10/16 18:57:10 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 29.3 KB, free 366.1 MB)
2018-10-16 18:57:10.757 - stderr> 18/10/16 18:57:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.10.26:38843 (size: 29.3 KB, free: 366.2 MB)
2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1005
2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[7] at sql at NativeMethodAccessorImpl.java:0)
2018-10-16 18:57:10.758 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
2018-10-16 18:57:10.761 - stderr> 18/10/16 18:57:10 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 6315 bytes)
2018-10-16 18:57:10.762 - stderr> 18/10/16 18:57:10 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
2018-10-16 18:57:10.776 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
2018-10-16 18:57:10.777 - stderr> 18/10/16 18:57:10 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-10-16 18:57:10.779 - stderr> 18/10/16 18:57:10 INFO CodecConfig: Compression: SNAPPY
2018-10-16 18:57:10.779 - stderr> 18/10/16 18:57:10 INFO CodecConfig: Compression: SNAPPY
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet block size to 134217728
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet page size to 1048576
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
2018-10-16 18:57:10.782 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Dictionary is on
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Validation is off
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
2018-10-16 18:57:10.783 - stderr> 18/10/16 18:57:10 INFO ParquetOutputFormat: Maximum row group padding size is 0 bytes
2018-10-16 18:57:10.797 - stderr> 18/10/16 18:57:10 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
2018-10-16 18:57:10.797 - stderr> {
2018-10-16 18:57:10.797 - stderr>   "type" : "struct",
2018-10-16 18:57:10.797 - stderr>   "fields" : [ {
2018-10-16 18:57:10.797 - stderr>     "name" : "i",
2018-10-16 18:57:10.797 - stderr>     "type" : "integer",
2018-10-16 18:57:10.797 - stderr>     "nullable" : false,
2018-10-16 18:57:10.797 - stderr>     "metadata" : { }
2018-10-16 18:57:10.797 - stderr>   } ]
2018-10-16 18:57:10.797 - stderr> }
2018-10-16 18:57:10.797 - stderr> and corresponding Parquet message type:
2018-10-16 18:57:10.797 - stderr> message spark_schema {
2018-10-16 18:57:10.797 - stderr>   required int32 i;
2018-10-16 18:57:10.797 - stderr> }
2018-10-16 18:57:10.797 - stderr> 
2018-10-16 18:57:10.797 - stderr>        
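
The ParquetWriteSupport message above shows the schema translation for `select 1 i`: a non-nullable Catalyst IntegerType column becomes `required int32 i` in the Parquet message type. A minimal sketch of the same schema in PySpark, assuming pyspark.sql.types (the JSON it prints differs from the pretty-printed log only in formatting):

    # Sketch of the Catalyst schema logged above.
    from pyspark.sql.types import StructType, StructField, IntegerType

    schema = StructType([StructField("i", IntegerType(), nullable=False)])
    print(schema.json())  # same struct as the Catalyst JSON above, modulo whitespace and key order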
2018-10-16 18:57:10.829 - stderr> 18/10/16 18:57:10 INFO CodecPool: Got brand-new compressor [.snappy]
2018-10-16 18:57:10.871 - stderr> 18/10/16 18:57:10 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 8
2018-10-16 18:57:10.923 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.1.3/jars/snappy-java-1.1.2.6.jar (No such file or directory)
2018-10-16 18:57:10.924 - stderr> java.lang.NullPointerException
2018-10-16 18:57:10.924 - stderr> 	at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232)
2018-10-16 18:57:10.924 - stderr> 	at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344)
2018-10-16 18:57:10.924 - stderr> 	at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
2018-10-16 18:57:10.924 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
2018-10-16 18:57:10.924 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.924 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.924 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.924 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.924 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.928 - stderr> 18/10/16 18:57:10 ERROR Utils: Aborting task
2018-10-16 18:57:10.928 - stderr> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.928 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.928 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.928 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.929 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.929 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.929 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.929 - stderr> 	at java.lang.Thread.run(Thread.java:745)
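
Root cause: the FileNotFoundException at 18:57:10.923 shows that snappy-java-1.1.2.6.jar is missing from /tmp/test-spark/spark-2.1.3/jars, so SnappyLoader cannot extract its native library and the Parquet page flush for the second table fails with FAILED_TO_LOAD_NATIVE_LIBRARY (the suppressed IOException about state BLOCK is just the writer being closed after that error). A hedged workaround sketch, not part of the original test, that sidesteps the native Snappy codec for this write:

    # Assumes the same SparkSession; spark.sql.parquet.compression.codec is a standard Spark SQL conf.
    spark.conf.set("spark.sql.parquet.compression.codec", "gzip")  # or "uncompressed"
    spark.sql("create table hive_compatible_data_source_tbl_0 using parquet as select 1 i")

The more direct fix is presumably to restore the incomplete Spark 2.1.3 distribution under /tmp/test-spark, since the jar the loader is looking for is simply not there.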
2018-10-16 18:57:10.929 - stderr> 18/10/16 18:57:10 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 1,024
2018-10-16 18:57:10.931 - stderr> 18/10/16 18:57:10 ERROR FileFormatWriter: Job job_20181016185710_0001 aborted.
2018-10-16 18:57:10.932 - stderr> 18/10/16 18:57:10 WARN Utils: Suppressing exception in catch: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.932 - stderr> java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.932 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.932 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.932 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.932 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.934 - stderr> 18/10/16 18:57:10 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
2018-10-16 18:57:10.934 - stderr> org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.934 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.934 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.934 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.934 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.934 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.934 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.934 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.934 - stderr> 	... 8 more
2018-10-16 18:57:10.934 - stderr> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.934 - stderr> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.934 - stderr> 		... 9 more
2018-10-16 18:57:10.957 - stderr> 18/10/16 18:57:10 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.957 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.957 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.957 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.957 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.957 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.957 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.957 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.957 - stderr> 	... 8 more
2018-10-16 18:57:10.957 - stderr> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.957 - stderr> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.957 - stderr> 		... 9 more
2018-10-16 18:57:10.957 - stderr> 
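The root cause in the trace above is not the Parquet writer itself but snappy-java failing to load its native library: Parquet's SnappyCompressor triggers the org.xerial.snappy.Snappy class initializer, which throws FAILED_TO_LOAD_NATIVE_LIBRARY, and the writer then aborts with the suppressed "invalid state: BLOCK" IOException while closing. A hypothetical way to sidestep the native dependency entirely, sketched below under the assumption that the goal is only to materialize the test tables (the table name and codec choice are illustrative, not taken from the test script), is to write the Parquet data with a pure-Java codec so snappy-java is never initialized:

    // Minimal sketch, not the test's actual script: prepare a table with a
    // non-Snappy Parquet codec so org.xerial.snappy is never touched.
    import org.apache.spark.sql.SparkSession

    object PrepareTablesWithoutSnappy {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("prepare testing tables")
          .master("local[2]")
          // Replace Snappy (the default Parquet codec) with gzip for this write.
          .config("spark.sql.parquet.compression.codec", "gzip")
          .enableHiveSupport()
          .getOrCreate()

        import spark.implicits._
        // Stand-in for the table-preparation step; "data_source_tbl" is assumed.
        Seq((1, "a"), (2, "b")).toDF("key", "value")
          .write.mode("overwrite").saveAsTable("data_source_tbl")

        spark.stop()
      }
    }
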
2018-10-16 18:57:10.958 - stderr> 18/10/16 18:57:10 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
2018-10-16 18:57:10.959 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
2018-10-16 18:57:10.965 - stderr> 18/10/16 18:57:10 INFO TaskSchedulerImpl: Cancelling stage 1
2018-10-16 18:57:10.966 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) failed in 0.207 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.966 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.966 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.966 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.966 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.966 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.966 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:240)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:164)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:191)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1356)
2018-10-16 18:57:10.966 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
2018-10-16 18:57:10.966 - stderr> 	... 8 more
2018-10-16 18:57:10.966 - stderr> 	Suppressed: java.io.IOException: The file being written is in an invalid state. Probably caused by an error thrown previously. Current state: BLOCK
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.error(ParquetFileWriter.java:165)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter$STATE.startBlock(ParquetFileWriter.java:157)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetFileWriter.startBlock(ParquetFileWriter.java:263)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:163)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:44)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:252)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$1.apply$mcV$sp(FileFormatWriter.scala:196)
2018-10-16 18:57:10.966 - stderr> 		at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1365)
2018-10-16 18:57:10.966 - stderr> 		... 9 more
2018-10-16 18:57:10.966 - stderr> 
2018-10-16 18:57:10.966 - stderr> Driver stacktrace:
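FAILED_TO_LOAD_NATIVE_LIBRARY from SnappyLoader usually means snappy-java could not extract and load its bundled native library, commonly because the JVM temp directory on the build host is full, unwritable, or mounted noexec. A small probe, assuming that hypothesis, is to point snappy-java at a known-good directory via its documented system properties and touch Snappy directly, without going through a full Parquet write; the directory path below is illustrative only:

    // Diagnostic sketch: reproduce (or rule out) the loader failure in isolation.
    object SnappyLoadCheck {
      def main(args: Array[String]): Unit = {
        // Must be set before org.xerial.snappy.Snappy is first loaded.
        System.setProperty("org.xerial.snappy.tempdir", "/var/tmp/snappy-native")
        // Alternatively, force a system-installed libsnappy:
        // System.setProperty("org.xerial.snappy.use.systemlib", "true")

        val compressed = org.xerial.snappy.Snappy.compress("probe".getBytes("UTF-8"))
        println(s"snappy-java loaded; compressed ${compressed.length} bytes")
      }
    }
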
2018-10-16 18:57:10.967 - stderr> 18/10/16 18:57:10 INFO DAGScheduler: Job 1 failed: sql at NativeMethodAccessorImpl.java:0, took 0.249831 s
2018-10-16 18:57:10.969 - stderr> 18/10/16 18:57:10 ERROR FileFormatWriter: Aborting job null.
2018-10-16 18:57:10.969 - stderr> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:100)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
2018-10-16 18:57:10.969 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2018-10-16 18:57:10.969 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2018-10-16 18:57:10.969 - stderr> 	at java.lang.Thread.run(Thread.java:745)
2018-10-16 18:57:10.969 - stderr> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-10-16 18:57:10.969 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-10-16 18:57:10.969 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-10-16 18:57:10.969 - stderr> 	at org.apache.parquet.hadoop.ColumnChun