org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1. Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4' '--conf' 'spark.sql.test.version.index=2' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py'
2018-09-28 03:40:31.476 - stdout> 2018-09-28 03:40:31 WARN Utils:66 - Your hostname, amp-jenkins-staging-worker-01 resolves to a loopback address: 127.0.1.1; using 192.168.10.31 instead (on interface eno1) 2018-09-28 03:40:31.477 - stdout> 2018-09-28 03:40:31 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address 2018-09-28 03:40:31.997 - stdout> 2018-09-28 03:40:31 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2018-09-28 03:40:32.795 - stdout> 2018-09-28 03:40:32 INFO SparkContext:54 - Running Spark version 2.3.1 2018-09-28 03:40:32.83 - stdout> 2018-09-28 03:40:32 INFO SparkContext:54 - Submitted application: prepare testing tables 2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO SecurityManager:54 - Changing view acls to: jenkins 2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO SecurityManager:54 - Changing modify acls to: jenkins 2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO SecurityManager:54 - Changing view acls groups to: 2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO SecurityManager:54 - Changing modify acls groups to: 2018-09-28 03:40:32.907 - stdout> 2018-09-28 03:40:32 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jenkins); groups with view permissions: Set(); users with modify permissions: Set(jenkins); groups with modify permissions: Set() 2018-09-28 03:40:33.243 - stdout> 2018-09-28 03:40:33 INFO Utils:54 - Successfully started service 'sparkDriver' on port 40073.
2018-09-28 03:40:33.273 - stdout> 2018-09-28 03:40:33 INFO SparkEnv:54 - Registering MapOutputTracker 2018-09-28 03:40:33.308 - stdout> 2018-09-28 03:40:33 INFO SparkEnv:54 - Registering BlockManagerMaster 2018-09-28 03:40:33.313 - stdout> 2018-09-28 03:40:33 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2018-09-28 03:40:33.313 - stdout> 2018-09-28 03:40:33 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up 2018-09-28 03:40:33.33 - stdout> 2018-09-28 03:40:33 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-502910f0-d410-45ba-b798-a4bf31cc4c7b 2018-09-28 03:40:33.362 - stdout> 2018-09-28 03:40:33 INFO MemoryStore:54 - MemoryStore started with capacity 366.3 MB 2018-09-28 03:40:33.384 - stdout> 2018-09-28 03:40:33 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2018-09-28 03:40:33.706 - stdout> 2018-09-28 03:40:33 INFO SparkContext:54 - Added file file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py at file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py with timestamp 1538131233705 2018-09-28 03:40:33.71 - stdout> 2018-09-28 03:40:33 INFO Utils:54 - Copying /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py to /tmp/spark-94575c2e-47b8-48be-a640-0781b8b5bad1/userFiles-a7e77610-7793-4102-ae69-29e2100cbcd3/test8982217628038498998.py 2018-09-28 03:40:33.803 - stdout> 2018-09-28 03:40:33 INFO Executor:54 - Starting executor ID driver on host localhost 2018-09-28 03:40:33.835 - stdout> 2018-09-28 03:40:33 INFO Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35027. 2018-09-28 03:40:33.837 - stdout> 2018-09-28 03:40:33 INFO NettyBlockTransferService:54 - Server created on 192.168.10.31:35027 2018-09-28 03:40:33.84 - stdout> 2018-09-28 03:40:33 INFO BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 2018-09-28 03:40:33.899 - stdout> 2018-09-28 03:40:33 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 192.168.10.31, 35027, None) 2018-09-28 03:40:33.906 - stdout> 2018-09-28 03:40:33 INFO BlockManagerMasterEndpoint:54 - Registering block manager 192.168.10.31:35027 with 366.3 MB RAM, BlockManagerId(driver, 192.168.10.31, 35027, None) 2018-09-28 03:40:33.91 - stdout> 2018-09-28 03:40:33 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 192.168.10.31, 35027, None) 2018-09-28 03:40:33.911 - stdout> 2018-09-28 03:40:33 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 192.168.10.31, 35027, None) 2018-09-28 03:40:34.142 - stdout> 2018-09-28 03:40:34 INFO log:192 - Logging initialized @3419ms 2018-09-28 03:40:34.323 - stdout> 2018-09-28 03:40:34 INFO SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4'). 2018-09-28 03:40:34.324 - stdout> 2018-09-28 03:40:34 INFO SharedState:54 - Warehouse path is '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4'. 
2018-09-28 03:40:34.95 - stdout> 2018-09-28 03:40:34 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint 2018-09-28 03:40:35.347 - stdout> 2018-09-28 03:40:35 INFO HiveUtils:54 - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. 2018-09-28 03:40:35.97 - stdout> 2018-09-28 03:40:35 INFO HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2018-09-28 03:40:35.996 - stdout> 2018-09-28 03:40:35 INFO ObjectStore:289 - ObjectStore, initialize called 2018-09-28 03:40:36.105 - stdout> 2018-09-28 03:40:36 INFO Persistence:77 - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 2018-09-28 03:40:36.105 - stdout> 2018-09-28 03:40:36 INFO Persistence:77 - Property datanucleus.cache.level2 unknown - will be ignored 2018-09-28 03:40:37.338 - stdout> 2018-09-28 03:40:37 INFO ObjectStore:370 - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2018-09-28 03:40:38.487 - stdout> 2018-09-28 03:40:38 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-09-28 03:40:38.487 - stdout> 2018-09-28 03:40:38 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-09-28 03:40:38.672 - stdout> 2018-09-28 03:40:38 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2018-09-28 03:40:38.672 - stdout> 2018-09-28 03:40:38 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2018-09-28 03:40:38.738 - stdout> 2018-09-28 03:40:38 INFO Query:77 - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2018-09-28 03:40:38.74 - stdout> 2018-09-28 03:40:38 INFO MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY 2018-09-28 03:40:38.742 - stdout> 2018-09-28 03:40:38 INFO ObjectStore:272 - Initialized ObjectStore 2018-09-28 03:40:38.918 - stdout> 2018-09-28 03:40:38 INFO HiveMetaStore:663 - Added admin role in metastore 2018-09-28 03:40:38.919 - stdout> 2018-09-28 03:40:38 INFO HiveMetaStore:672 - Added public role in metastore 2018-09-28 03:40:38.958 - stdout> 2018-09-28 03:40:38 INFO HiveMetaStore:712 - No user is added in admin role, since config is empty 2018-09-28 03:40:39.061 - stdout> 2018-09-28 03:40:39 INFO HiveMetaStore:746 - 0: get_all_databases 2018-09-28 03:40:39.062 - stdout> 2018-09-28 03:40:39 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_all_databases 2018-09-28 03:40:39.081 - stdout> 2018-09-28 03:40:39 INFO HiveMetaStore:746 - 0: get_functions: db=default pat=* 2018-09-28 03:40:39.081 - stdout> 2018-09-28 03:40:39 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_functions: db=default pat=* 2018-09-28 03:40:39.084 - stdout> 2018-09-28 03:40:39 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 
2018-09-28 03:40:39.161 - stdout> 2018-09-28 03:40:39 INFO SessionState:641 - Created local directory: /tmp/73fd88df-566a-4073-a2f8-2470dff4c516_resources 2018-09-28 03:40:39.166 - stdout> 2018-09-28 03:40:39 INFO SessionState:641 - Created HDFS directory: /tmp/hive/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516 2018-09-28 03:40:39.171 - stdout> 2018-09-28 03:40:39 INFO SessionState:641 - Created local directory: /tmp/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516 2018-09-28 03:40:39.175 - stdout> 2018-09-28 03:40:39 INFO SessionState:641 - Created HDFS directory: /tmp/hive/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516/_tmp_space.db 2018-09-28 03:40:39.178 - stdout> 2018-09-28 03:40:39 INFO HiveClientImpl:54 - Warehouse location for Hive client (version 1.2.2) is /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4 2018-09-28 03:40:39.19 - stdout> 2018-09-28 03:40:39 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:39.191 - stdout> 2018-09-28 03:40:39 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:39.196 - stdout> 2018-09-28 03:40:39 INFO HiveMetaStore:746 - 0: get_database: global_temp 2018-09-28 03:40:39.197 - stdout> 2018-09-28 03:40:39 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: global_temp 2018-09-28 03:40:39.199 - stdout> 2018-09-28 03:40:39 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException 2018-09-28 03:40:41.107 - stdout> 2018-09-28 03:40:41 INFO HiveMetaStore:746 - 0: get_table : db=default tbl=data_source_tbl_2 2018-09-28 03:40:41.107 - stdout> 2018-09-28 03:40:41 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=data_source_tbl_2 2018-09-28 03:40:41.13 - stdout> 2018-09-28 03:40:41 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:41.13 - stdout> 2018-09-28 03:40:41 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:41.132 - stdout> 2018-09-28 03:40:41 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:41.133 - stdout> 2018-09-28 03:40:41 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:41.234 - stdout> 2018-09-28 03:40:41 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:41.235 - stdout> 2018-09-28 03:40:41 INFO SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2018-09-28 03:40:41.659 - stdout> 2018-09-28 03:40:41 INFO CodeGenerator:54 - Code generated in 259.518177 ms 2018-09-28 03:40:41.888 - stdout> 2018-09-28 03:40:41 INFO SparkContext:54 - Starting job: sql at NativeMethodAccessorImpl.java:0 2018-09-28 03:40:41.919 - stdout> 2018-09-28 03:40:41 INFO DAGScheduler:54 - Got job 0 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions 2018-09-28 03:40:41.92 - stdout> 2018-09-28 03:40:41 INFO DAGScheduler:54 - Final stage: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) 2018-09-28 03:40:41.921 - stdout> 2018-09-28 03:40:41 INFO DAGScheduler:54 - Parents of final stage: List() 2018-09-28 03:40:41.925 - stdout> 2018-09-28 03:40:41 INFO DAGScheduler:54 - Missing parents: List() 2018-09-28 03:40:41.937 - stdout> 2018-09-28 03:40:41 INFO DAGScheduler:54 - Submitting ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents 2018-09-28 03:40:42.074 - stdout> 2018-09-28 
03:40:42 INFO MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 149.6 KB, free 366.2 MB) 2018-09-28 03:40:42.108 - stdout> 2018-09-28 03:40:42 INFO MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 54.4 KB, free 366.1 MB) 2018-09-28 03:40:42.111 - stdout> 2018-09-28 03:40:42 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on 192.168.10.31:35027 (size: 54.4 KB, free: 366.2 MB) 2018-09-28 03:40:42.114 - stdout> 2018-09-28 03:40:42 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1039 2018-09-28 03:40:42.126 - stdout> 2018-09-28 03:40:42 INFO DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0)) 2018-09-28 03:40:42.128 - stdout> 2018-09-28 03:40:42 INFO TaskSchedulerImpl:54 - Adding task set 0.0 with 1 tasks 2018-09-28 03:40:42.181 - stdout> 2018-09-28 03:40:42 INFO TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8067 bytes) 2018-09-28 03:40:42.194 - stdout> 2018-09-28 03:40:42 INFO Executor:54 - Running task 0.0 in stage 0.0 (TID 0) 2018-09-28 03:40:42.199 - stdout> 2018-09-28 03:40:42 INFO Executor:54 - Fetching file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py with timestamp 1538131233705 2018-09-28 03:40:42.235 - stdout> 2018-09-28 03:40:42 INFO Utils:54 - /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py has been previously copied to /tmp/spark-94575c2e-47b8-48be-a640-0781b8b5bad1/userFiles-a7e77610-7793-4102-ae69-29e2100cbcd3/test8982217628038498998.py 2018-09-28 03:40:42.372 - stdout> 2018-09-28 03:40:42 INFO CodeGenerator:54 - Code generated in 20.870431 ms 2018-09-28 03:40:42.38 - stdout> 2018-09-28 03:40:42 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:42.381 - stdout> 2018-09-28 03:40:42 INFO SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 2018-09-28 03:40:42.471 - stdout> 2018-09-28 03:40:42 INFO FileOutputCommitter:535 - Saved output of task 'attempt_20180928034042_0000_m_000000_0' to file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2/_temporary/0/task_20180928034042_0000_m_000000 2018-09-28 03:40:42.471 - stdout> 2018-09-28 03:40:42 INFO SparkHadoopMapRedUtil:54 - attempt_20180928034042_0000_m_000000_0: Committed 2018-09-28 03:40:42.492 - stdout> 2018-09-28 03:40:42 INFO Executor:54 - Finished task 0.0 in stage 0.0 (TID 0). 2133 bytes result sent to driver 2018-09-28 03:40:42.507 - stdout> 2018-09-28 03:40:42 INFO TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 345 ms on localhost (executor driver) (1/1) 2018-09-28 03:40:42.513 - stdout> 2018-09-28 03:40:42 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool 2018-09-28 03:40:42.524 - stdout> 2018-09-28 03:40:42 INFO DAGScheduler:54 - ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) finished in 0.560 s 2018-09-28 03:40:42.533 - stdout> 2018-09-28 03:40:42 INFO DAGScheduler:54 - Job 0 finished: sql at NativeMethodAccessorImpl.java:0, took 0.644276 s 2018-09-28 03:40:42.555 - stdout> 2018-09-28 03:40:42 INFO FileFormatWriter:54 - Job null committed. 
2018-09-28 03:40:42.563 - stdout> 2018-09-28 03:40:42 INFO FileFormatWriter:54 - Finished processing stats for job null. 2018-09-28 03:40:42.628 - stdout> 2018-09-28 03:40:42 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:42.628 - stdout> 2018-09-28 03:40:42 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:42.633 - stdout> 2018-09-28 03:40:42 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:42.633 - stdout> 2018-09-28 03:40:42 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:42.636 - stdout> 2018-09-28 03:40:42 INFO HiveMetaStore:746 - 0: get_table : db=default tbl=data_source_tbl_2 2018-09-28 03:40:42.637 - stdout> 2018-09-28 03:40:42 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=data_source_tbl_2 2018-09-28 03:40:42.684 - stdout> 2018-09-28 03:40:42 WARN HiveExternalCatalog:66 - Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`data_source_tbl_2` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. 2018-09-28 03:40:42.876 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@6678385 2018-09-28 03:40:42.876 - stderr> at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151) 2018-09-28 03:40:42.876 - stderr> at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79) 2018-09-28 03:40:42.877 - stderr> at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198) 2018-09-28 03:40:42.877 - stderr> at org.joda.time.DateTime.<init>(DateTime.java:476) 2018-09-28 03:40:42.877 - stderr> at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53) 2018-09-28 03:40:42.877 - stderr> at 
org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197) 2018-09-28 03:40:42.877 - stderr> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:468) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:466) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:466) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:272) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:210) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:209) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:255) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:466) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:479) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$createDataSourceTable(HiveExternalCatalog.scala:379) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply$mcV$sp(HiveExternalCatalog.scala:243) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply(HiveExternalCatalog.scala:216) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply(HiveExternalCatalog.scala:216) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.hive.HiveExternalCatalog.doCreateTable(HiveExternalCatalog.scala:216) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.createTable(ExternalCatalog.scala:119) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:304) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:184) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 2018-09-28 
03:40:42.877 - stderr> at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75) 2018-09-28 03:40:42.877 - stderr> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641) 2018-09-28 03:40:42.877 - stderr> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2018-09-28 03:40:42.877 - stderr> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2018-09-28 03:40:42.877 - stderr> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2018-09-28 03:40:42.877 - stderr> at java.lang.reflect.Method.invoke(Method.java:498) 2018-09-28 03:40:42.877 - stderr> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 2018-09-28 03:40:42.877 - stderr> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 2018-09-28 03:40:42.877 - stderr> at py4j.Gateway.invoke(Gateway.java:282) 2018-09-28 03:40:42.877 - stderr> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 2018-09-28 03:40:42.878 - stderr> at py4j.commands.CallCommand.execute(CallCommand.java:79) 2018-09-28 03:40:42.878 - stderr> at py4j.GatewayConnection.run(GatewayConnection.java:238) 2018-09-28 03:40:42.878 - stderr> at java.lang.Thread.run(Thread.java:748) 2018-09-28 03:40:42.932 - stdout> 2018-09-28 03:40:42 INFO HiveMetaStore:746 - 0: create_table: Table(tableName:data_source_tbl_2, dbName:default, owner:jenkins, createTime:1538131235, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json, spark.sql.create.version=2.3.1}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null)) 2018-09-28 03:40:42.932 - stdout> 2018-09-28 03:40:42 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=create_table: Table(tableName:data_source_tbl_2, dbName:default, owner:jenkins, createTime:1538131235, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json, spark.sql.create.version=2.3.1}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null)) 2018-09-28 03:40:42.942 - stdout> 2018-09-28 03:40:42 INFO log:217 - Updating table stats fast for data_source_tbl_2 2018-09-28 03:40:42.942 - stdout> 2018-09-28 03:40:42 INFO log:219 - Updated size of table data_source_tbl_2 to 8 2018-09-28 03:40:43.206 - stdout> 2018-09-28 03:40:43 INFO HiveMetaStore:746 - 0: get_table : db=default tbl=hive_compatible_data_source_tbl_2 2018-09-28 03:40:43.207 - stdout> 2018-09-28 03:40:43 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_table : db=default tbl=hive_compatible_data_source_tbl_2 2018-09-28 03:40:43.209 - stdout> 2018-09-28 03:40:43 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:43.209 - stdout> 2018-09-28 03:40:43 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:43.211 - stdout> 2018-09-28 03:40:43 INFO HiveMetaStore:746 - 0: get_database: default 2018-09-28 03:40:43.211 - stdout> 2018-09-28 03:40:43 INFO audit:371 - ugi=jenkins ip=unknown-ip-addr cmd=get_database: default 2018-09-28 03:40:43.231 - stdout> 2018-09-28 03:40:43 INFO ParquetFileFormat:54 - Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter 2018-09-28 03:40:43.239 - stdout> 2018-09-28 03:40:43 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO SQLHadoopMapReduceCommitProtocol:54 - Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-09-28 03:40:43.28 - stdout> 2018-09-28 03:40:43 INFO SparkContext:54 - Starting job: sql at NativeMethodAccessorImpl.java:0 2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Got job 1 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions 2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Final stage: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) 2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Parents of final stage: List() 2018-09-28 03:40:43.283 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Missing parents: List() 2018-09-28 03:40:43.283 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Submitting ResultStage 1 (MapPartitionsRDD[4] at sql at 
NativeMethodAccessorImpl.java:0), which has no missing parents 2018-09-28 03:40:43.316 - stdout> 2018-09-28 03:40:43 INFO MemoryStore:54 - Block broadcast_1 stored as values in memory (estimated size 147.4 KB, free 366.0 MB) 2018-09-28 03:40:43.32 - stdout> 2018-09-28 03:40:43 INFO MemoryStore:54 - Block broadcast_1_piece0 stored as bytes in memory (estimated size 52.4 KB, free 365.9 MB) 2018-09-28 03:40:43.322 - stdout> 2018-09-28 03:40:43 INFO BlockManagerInfo:54 - Added broadcast_1_piece0 in memory on 192.168.10.31:35027 (size: 52.4 KB, free: 366.2 MB) 2018-09-28 03:40:43.323 - stdout> 2018-09-28 03:40:43 INFO SparkContext:54 - Created broadcast 1 from broadcast at DAGScheduler.scala:1039 2018-09-28 03:40:43.324 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at sql at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0)) 2018-09-28 03:40:43.324 - stdout> 2018-09-28 03:40:43 INFO TaskSchedulerImpl:54 - Adding task set 1.0 with 1 tasks 2018-09-28 03:40:43.327 - stdout> 2018-09-28 03:40:43 INFO TaskSetManager:54 - Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 8067 bytes) 2018-09-28 03:40:43.327 - stdout> 2018-09-28 03:40:43 INFO Executor:54 - Running task 0.0 in stage 1.0 (TID 1) 2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO SQLHadoopMapReduceCommitProtocol:54 - Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO FileOutputCommitter:108 - File Output Committer Algorithm version is 1 2018-09-28 03:40:43.359 - stdout> 2018-09-28 03:40:43 INFO SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter 2018-09-28 03:40:43.362 - stdout> 2018-09-28 03:40:43 INFO CodecConfig:95 - Compression: SNAPPY 2018-09-28 03:40:43.364 - stdout> 2018-09-28 03:40:43 INFO CodecConfig:95 - Compression: SNAPPY 2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:329 - Parquet block size to 134217728 2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:330 - Parquet page size to 1048576 2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:331 - Parquet dictionary page size to 1048576 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:332 - Dictionary is on 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:333 - Validation is off 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:334 - Writer version is: PARQUET_1_0 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:335 - Maximum row group padding size is 0 bytes 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:336 - Page size checking is: estimated 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:337 - Min row count for page size check is: 100 2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO ParquetOutputFormat:338 - Max row count for page size check is: 10000 2018-09-28 03:40:43.415 - stdout> 2018-09-28 03:40:43 INFO ParquetWriteSupport:54 - Initialized Parquet WriteSupport with Catalyst schema: 2018-09-28 03:40:43.415 - stdout> { 2018-09-28 03:40:43.415 - stdout> "type" 
: "struct", 2018-09-28 03:40:43.415 - stdout> "fields" : [ { 2018-09-28 03:40:43.415 - stdout> "name" : "i", 2018-09-28 03:40:43.415 - stdout> "type" : "integer", 2018-09-28 03:40:43.415 - stdout> "nullable" : false, 2018-09-28 03:40:43.415 - stdout> "metadata" : { } 2018-09-28 03:40:43.415 - stdout> } ] 2018-09-28 03:40:43.415 - stdout> } 2018-09-28 03:40:43.415 - stdout> and corresponding Parquet message type: 2018-09-28 03:40:43.415 - stdout> message spark_schema { 2018-09-28 03:40:43.415 - stdout> required int32 i; 2018-09-28 03:40:43.415 - stdout> } 2018-09-28 03:40:43.415 - stdout> 2018-09-28 03:40:43.415 - stdout> 2018-09-28 03:40:43.445 - stdout> 2018-09-28 03:40:43 INFO CodecPool:153 - Got brand-new compressor [.snappy] 2018-09-28 03:40:43.492 - stdout> 2018-09-28 03:40:43 INFO InternalParquetRecordWriter:160 - Flushing mem columnStore to file. allocated memory: 8 2018-09-28 03:40:43.564 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.3.1/jars/snappy-java-1.1.2.6.jar (No such file or directory) 2018-09-28 03:40:43.564 - stderr> java.lang.NullPointerException 2018-09-28 03:40:43.564 - stderr> at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232) 2018-09-28 03:40:43.564 - stderr> at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344) 2018-09-28 03:40:43.564 - stderr> at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171) 2018-09-28 03:40:43.565 - stderr> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152) 2018-09-28 03:40:43.565 - stderr> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-09-28 03:40:43.565 - stderr> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-09-28 03:40:43.565 - stderr> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109) 2018-09-28 03:40:43.565 - stderr> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396) 2018-09-28 03:40:43.565 - stderr> at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.scheduler.Task.run(Task.scala:109) 2018-09-28 03:40:43.565 - stderr> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) 2018-09-28 03:40:43.565 - stderr> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 2018-09-28 03:40:43.565 - stderr> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 2018-09-28 03:40:43.565 - stderr> at java.lang.Thread.run(Thread.java:748) 2018-09-28 03:40:43.57 - stdout> 2018-09-28 03:40:43 ERROR Utils:91 - Aborting task 2018-09-28 03:40:43.57 - stdout> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-09-28 03:40:43.57 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-09-28 03:40:43.57 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-09-28 03:40:43.57 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-09-28 03:40:43.57 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109) 2018-09-28 03:40:43.57 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42) 2018-09-28 03:40:43.57 - stdout> at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:109) 2018-09-28 03:40:43.57 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) 2018-09-28 03:40:43.57 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 2018-09-28 03:40:43.57 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 2018-09-28 03:40:43.57 - stdout> at java.lang.Thread.run(Thread.java:748) 2018-09-28 03:40:43.574 - stdout> 2018-09-28 03:40:43 ERROR FileFormatWriter:70 - Job job_20180928034043_0001 aborted. 2018-09-28 03:40:43.579 - stdout> 2018-09-28 03:40:43 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1) 2018-09-28 03:40:43.579 - stdout> org.apache.spark.SparkException: Task failed while writing rows. 
2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:109) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) 2018-09-28 03:40:43.579 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 2018-09-28 03:40:43.579 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 2018-09-28 03:40:43.579 - stdout> at java.lang.Thread.run(Thread.java:748) 2018-09-28 03:40:43.579 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-09-28 03:40:43.579 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-09-28 03:40:43.579 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-09-28 03:40:43.579 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-09-28 03:40:43.579 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109) 2018-09-28 03:40:43.579 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269) 2018-09-28 03:40:43.579 - stdout> at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414) 2018-09-28 03:40:43.579 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) 2018-09-28 03:40:43.579 - stdout> ... 8 more 2018-09-28 03:40:43.608 - stdout> 2018-09-28 03:40:43 WARN TaskSetManager:66 - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows. 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285) 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197) 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196) 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:109) 2018-09-28 03:40:43.608 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) 2018-09-28 03:40:43.608 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 2018-09-28 03:40:43.608 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 2018-09-28 03:40:43.608 - stdout> at java.lang.Thread.run(Thread.java:748) 2018-09-28 03:40:43.608 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null 2018-09-28 03:40:43.608 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159) 2018-09-28 03:40:43.608 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47) 2018-09-28 03:40:43.608 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) 2018-09-28 03:40:43.608 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) 2018-09-28 03:40:43.609 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109) 2018-09-28 03:40:43.609 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163) 
2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414) 2018-09-28 03:40:43.609 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) 2018-09-28 03:40:43.609 - stdout> ... 8 more 2018-09-28 03:40:43.609 - stdout> 2018-09-28 03:40:43.61 - stdout> 2018-09-28 03:40:43 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job 2018-09-28 03:40:43.611 - stdout> 2018-09-28 03:40:43 INFO TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool 2018-09-28 03:40:43.616 - stdout> 2018-09-28 03:40:43 INFO TaskSchedulerImpl:54 - Cancelling stage 1 2018-09-28 03:40:43.618 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) failed in 0.331 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows. 
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.618 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.618 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.618 - stdout> at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.618 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.618 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.618 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.618 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.618 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.618 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.618 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.618 - stdout> ... 8 more
2018-09-28 03:40:43.618 - stdout> 
2018-09-28 03:40:43.618 - stdout> Driver stacktrace:
2018-09-28 03:40:43.619 - stdout> 2018-09-28 03:40:43 INFO DAGScheduler:54 - Job 1 failed: sql at NativeMethodAccessorImpl.java:0, took 0.338829 s
2018-09-28 03:40:43.622 - stdout> 2018-09-28 03:40:43 ERROR FileFormatWriter:91 - Aborting job null.
2018-09-28 03:40:43.622 - stdout> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.622 - stdout> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.622 - stdout> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.622 - stdout> at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.622 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.622 - stdout> at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.622 - stdout> at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.622 - stdout> at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.622 - stdout> at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.622 - stdout> at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.622 - stdout> ... 8 more
2018-09-28 03:40:43.622 - stdout> 
2018-09-28 03:40:43.622 - stdout> Driver stacktrace:
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
2018-09-28 03:40:43.622 - stdout> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
2018-09-28 03:40:43.622 - stdout> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> at scala.Option.foreach(Option.scala:257)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:528)
2018-09-28 03:40:43.622 - stdout> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.saveDataIntoTable(createDataSourceTables.scala:216)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:176)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
2018-09-28 03:40:43.623 - stdout> at org.apache.spark.sql.SparkSession.sql(Spark

sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1.
Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4' '--conf' 'spark.sql.test.version.index=2' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py'
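
For orientation, here is a minimal sketch of the kind of PySpark driver this spark-submit command appears to run. The generated test8982217628038498998.py itself is not reproduced in this log, so the exact statements below are assumptions, reconstructed only from what the output shows further down: the table names data_source_tbl_2 and hive_compatible_data_source_tbl_2, the json and parquet providers, and the single integer column i.

    # Hypothetical reconstruction of the "prepare testing tables" driver;
    # the real generated script is not shown in this log.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # '--conf spark.sql.test.version.index=2' shows up as the _2 suffix on the table names.
    index = spark.conf.get("spark.sql.test.version.index")

    # A JSON-backed data source table (matches the later HiveExternalCatalog warning about
    # persisting provider 'json' in a Spark-specific, non-Hive-compatible format).
    spark.sql("CREATE TABLE data_source_tbl_{} USING json AS SELECT 1 AS i".format(index))

    # A Parquet-backed table; writing its data exercises the SNAPPY compression path
    # that fails later in this log.
    spark.sql(
        "CREATE TABLE hive_compatible_data_source_tbl_{} USING parquet AS SELECT 1 AS i"
        .format(index)
    )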

2018-09-28 03:40:31.476 - stdout> 2018-09-28 03:40:31 WARN  Utils:66 - Your hostname, amp-jenkins-staging-worker-01 resolves to a loopback address: 127.0.1.1; using 192.168.10.31 instead (on interface eno1)
2018-09-28 03:40:31.477 - stdout> 2018-09-28 03:40:31 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
2018-09-28 03:40:31.997 - stdout> 2018-09-28 03:40:31 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-09-28 03:40:32.795 - stdout> 2018-09-28 03:40:32 INFO  SparkContext:54 - Running Spark version 2.3.1
2018-09-28 03:40:32.83 - stdout> 2018-09-28 03:40:32 INFO  SparkContext:54 - Submitted application: prepare testing tables
2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO  SecurityManager:54 - Changing view acls to: jenkins
2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO  SecurityManager:54 - Changing modify acls to: jenkins
2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO  SecurityManager:54 - Changing view acls groups to: 
2018-09-28 03:40:32.906 - stdout> 2018-09-28 03:40:32 INFO  SecurityManager:54 - Changing modify acls groups to: 
2018-09-28 03:40:32.907 - stdout> 2018-09-28 03:40:32 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
2018-09-28 03:40:33.243 - stdout> 2018-09-28 03:40:33 INFO  Utils:54 - Successfully started service 'sparkDriver' on port 40073.
2018-09-28 03:40:33.273 - stdout> 2018-09-28 03:40:33 INFO  SparkEnv:54 - Registering MapOutputTracker
2018-09-28 03:40:33.308 - stdout> 2018-09-28 03:40:33 INFO  SparkEnv:54 - Registering BlockManagerMaster
2018-09-28 03:40:33.313 - stdout> 2018-09-28 03:40:33 INFO  BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-09-28 03:40:33.313 - stdout> 2018-09-28 03:40:33 INFO  BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-09-28 03:40:33.33 - stdout> 2018-09-28 03:40:33 INFO  DiskBlockManager:54 - Created local directory at /tmp/blockmgr-502910f0-d410-45ba-b798-a4bf31cc4c7b
2018-09-28 03:40:33.362 - stdout> 2018-09-28 03:40:33 INFO  MemoryStore:54 - MemoryStore started with capacity 366.3 MB
2018-09-28 03:40:33.384 - stdout> 2018-09-28 03:40:33 INFO  SparkEnv:54 - Registering OutputCommitCoordinator
2018-09-28 03:40:33.706 - stdout> 2018-09-28 03:40:33 INFO  SparkContext:54 - Added file file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py at file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py with timestamp 1538131233705
2018-09-28 03:40:33.71 - stdout> 2018-09-28 03:40:33 INFO  Utils:54 - Copying /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py to /tmp/spark-94575c2e-47b8-48be-a640-0781b8b5bad1/userFiles-a7e77610-7793-4102-ae69-29e2100cbcd3/test8982217628038498998.py
2018-09-28 03:40:33.803 - stdout> 2018-09-28 03:40:33 INFO  Executor:54 - Starting executor ID driver on host localhost
2018-09-28 03:40:33.835 - stdout> 2018-09-28 03:40:33 INFO  Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35027.
2018-09-28 03:40:33.837 - stdout> 2018-09-28 03:40:33 INFO  NettyBlockTransferService:54 - Server created on 192.168.10.31:35027
2018-09-28 03:40:33.84 - stdout> 2018-09-28 03:40:33 INFO  BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2018-09-28 03:40:33.899 - stdout> 2018-09-28 03:40:33 INFO  BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 192.168.10.31, 35027, None)
2018-09-28 03:40:33.906 - stdout> 2018-09-28 03:40:33 INFO  BlockManagerMasterEndpoint:54 - Registering block manager 192.168.10.31:35027 with 366.3 MB RAM, BlockManagerId(driver, 192.168.10.31, 35027, None)
2018-09-28 03:40:33.91 - stdout> 2018-09-28 03:40:33 INFO  BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 192.168.10.31, 35027, None)
2018-09-28 03:40:33.911 - stdout> 2018-09-28 03:40:33 INFO  BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 192.168.10.31, 35027, None)
2018-09-28 03:40:34.142 - stdout> 2018-09-28 03:40:34 INFO  log:192 - Logging initialized @3419ms
2018-09-28 03:40:34.323 - stdout> 2018-09-28 03:40:34 INFO  SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4').
2018-09-28 03:40:34.324 - stdout> 2018-09-28 03:40:34 INFO  SharedState:54 - Warehouse path is '/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4'.
2018-09-28 03:40:34.95 - stdout> 2018-09-28 03:40:34 INFO  StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
2018-09-28 03:40:35.347 - stdout> 2018-09-28 03:40:35 INFO  HiveUtils:54 - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
2018-09-28 03:40:35.97 - stdout> 2018-09-28 03:40:35 INFO  HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2018-09-28 03:40:35.996 - stdout> 2018-09-28 03:40:35 INFO  ObjectStore:289 - ObjectStore, initialize called
2018-09-28 03:40:36.105 - stdout> 2018-09-28 03:40:36 INFO  Persistence:77 - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2018-09-28 03:40:36.105 - stdout> 2018-09-28 03:40:36 INFO  Persistence:77 - Property datanucleus.cache.level2 unknown - will be ignored
2018-09-28 03:40:37.338 - stdout> 2018-09-28 03:40:37 INFO  ObjectStore:370 - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
2018-09-28 03:40:38.487 - stdout> 2018-09-28 03:40:38 INFO  Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2018-09-28 03:40:38.487 - stdout> 2018-09-28 03:40:38 INFO  Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2018-09-28 03:40:38.672 - stdout> 2018-09-28 03:40:38 INFO  Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
2018-09-28 03:40:38.672 - stdout> 2018-09-28 03:40:38 INFO  Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
2018-09-28 03:40:38.738 - stdout> 2018-09-28 03:40:38 INFO  Query:77 - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
2018-09-28 03:40:38.74 - stdout> 2018-09-28 03:40:38 INFO  MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY
2018-09-28 03:40:38.742 - stdout> 2018-09-28 03:40:38 INFO  ObjectStore:272 - Initialized ObjectStore
2018-09-28 03:40:38.918 - stdout> 2018-09-28 03:40:38 INFO  HiveMetaStore:663 - Added admin role in metastore
2018-09-28 03:40:38.919 - stdout> 2018-09-28 03:40:38 INFO  HiveMetaStore:672 - Added public role in metastore
2018-09-28 03:40:38.958 - stdout> 2018-09-28 03:40:38 INFO  HiveMetaStore:712 - No user is added in admin role, since config is empty
2018-09-28 03:40:39.061 - stdout> 2018-09-28 03:40:39 INFO  HiveMetaStore:746 - 0: get_all_databases
2018-09-28 03:40:39.062 - stdout> 2018-09-28 03:40:39 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_all_databases	
2018-09-28 03:40:39.081 - stdout> 2018-09-28 03:40:39 INFO  HiveMetaStore:746 - 0: get_functions: db=default pat=*
2018-09-28 03:40:39.081 - stdout> 2018-09-28 03:40:39 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_functions: db=default pat=*	
2018-09-28 03:40:39.084 - stdout> 2018-09-28 03:40:39 INFO  Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
2018-09-28 03:40:39.161 - stdout> 2018-09-28 03:40:39 INFO  SessionState:641 - Created local directory: /tmp/73fd88df-566a-4073-a2f8-2470dff4c516_resources
2018-09-28 03:40:39.166 - stdout> 2018-09-28 03:40:39 INFO  SessionState:641 - Created HDFS directory: /tmp/hive/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516
2018-09-28 03:40:39.171 - stdout> 2018-09-28 03:40:39 INFO  SessionState:641 - Created local directory: /tmp/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516
2018-09-28 03:40:39.175 - stdout> 2018-09-28 03:40:39 INFO  SessionState:641 - Created HDFS directory: /tmp/hive/jenkins/73fd88df-566a-4073-a2f8-2470dff4c516/_tmp_space.db
2018-09-28 03:40:39.178 - stdout> 2018-09-28 03:40:39 INFO  HiveClientImpl:54 - Warehouse location for Hive client (version 1.2.2) is /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4
2018-09-28 03:40:39.19 - stdout> 2018-09-28 03:40:39 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:39.191 - stdout> 2018-09-28 03:40:39 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:39.196 - stdout> 2018-09-28 03:40:39 INFO  HiveMetaStore:746 - 0: get_database: global_temp
2018-09-28 03:40:39.197 - stdout> 2018-09-28 03:40:39 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: global_temp	
2018-09-28 03:40:39.199 - stdout> 2018-09-28 03:40:39 WARN  ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
2018-09-28 03:40:41.107 - stdout> 2018-09-28 03:40:41 INFO  HiveMetaStore:746 - 0: get_table : db=default tbl=data_source_tbl_2
2018-09-28 03:40:41.107 - stdout> 2018-09-28 03:40:41 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=data_source_tbl_2	
2018-09-28 03:40:41.13 - stdout> 2018-09-28 03:40:41 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:41.13 - stdout> 2018-09-28 03:40:41 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:41.132 - stdout> 2018-09-28 03:40:41 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:41.133 - stdout> 2018-09-28 03:40:41 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:41.234 - stdout> 2018-09-28 03:40:41 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:41.235 - stdout> 2018-09-28 03:40:41 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-09-28 03:40:41.659 - stdout> 2018-09-28 03:40:41 INFO  CodeGenerator:54 - Code generated in 259.518177 ms
2018-09-28 03:40:41.888 - stdout> 2018-09-28 03:40:41 INFO  SparkContext:54 - Starting job: sql at NativeMethodAccessorImpl.java:0
2018-09-28 03:40:41.919 - stdout> 2018-09-28 03:40:41 INFO  DAGScheduler:54 - Got job 0 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions
2018-09-28 03:40:41.92 - stdout> 2018-09-28 03:40:41 INFO  DAGScheduler:54 - Final stage: ResultStage 0 (sql at NativeMethodAccessorImpl.java:0)
2018-09-28 03:40:41.921 - stdout> 2018-09-28 03:40:41 INFO  DAGScheduler:54 - Parents of final stage: List()
2018-09-28 03:40:41.925 - stdout> 2018-09-28 03:40:41 INFO  DAGScheduler:54 - Missing parents: List()
2018-09-28 03:40:41.937 - stdout> 2018-09-28 03:40:41 INFO  DAGScheduler:54 - Submitting ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents
2018-09-28 03:40:42.074 - stdout> 2018-09-28 03:40:42 INFO  MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 149.6 KB, free 366.2 MB)
2018-09-28 03:40:42.108 - stdout> 2018-09-28 03:40:42 INFO  MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 54.4 KB, free 366.1 MB)
2018-09-28 03:40:42.111 - stdout> 2018-09-28 03:40:42 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on 192.168.10.31:35027 (size: 54.4 KB, free: 366.2 MB)
2018-09-28 03:40:42.114 - stdout> 2018-09-28 03:40:42 INFO  SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1039
2018-09-28 03:40:42.126 - stdout> 2018-09-28 03:40:42 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at sql at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
2018-09-28 03:40:42.128 - stdout> 2018-09-28 03:40:42 INFO  TaskSchedulerImpl:54 - Adding task set 0.0 with 1 tasks
2018-09-28 03:40:42.181 - stdout> 2018-09-28 03:40:42 INFO  TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8067 bytes)
2018-09-28 03:40:42.194 - stdout> 2018-09-28 03:40:42 INFO  Executor:54 - Running task 0.0 in stage 0.0 (TID 0)
2018-09-28 03:40:42.199 - stdout> 2018-09-28 03:40:42 INFO  Executor:54 - Fetching file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py with timestamp 1538131233705
2018-09-28 03:40:42.235 - stdout> 2018-09-28 03:40:42 INFO  Utils:54 - /home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/test8982217628038498998.py has been previously copied to /tmp/spark-94575c2e-47b8-48be-a640-0781b8b5bad1/userFiles-a7e77610-7793-4102-ae69-29e2100cbcd3/test8982217628038498998.py
2018-09-28 03:40:42.372 - stdout> 2018-09-28 03:40:42 INFO  CodeGenerator:54 - Code generated in 20.870431 ms
2018-09-28 03:40:42.38 - stdout> 2018-09-28 03:40:42 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:42.381 - stdout> 2018-09-28 03:40:42 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2018-09-28 03:40:42.471 - stdout> 2018-09-28 03:40:42 INFO  FileOutputCommitter:535 - Saved output of task 'attempt_20180928034042_0000_m_000000_0' to file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2/_temporary/0/task_20180928034042_0000_m_000000
2018-09-28 03:40:42.471 - stdout> 2018-09-28 03:40:42 INFO  SparkHadoopMapRedUtil:54 - attempt_20180928034042_0000_m_000000_0: Committed
2018-09-28 03:40:42.492 - stdout> 2018-09-28 03:40:42 INFO  Executor:54 - Finished task 0.0 in stage 0.0 (TID 0). 2133 bytes result sent to driver
2018-09-28 03:40:42.507 - stdout> 2018-09-28 03:40:42 INFO  TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 345 ms on localhost (executor driver) (1/1)
2018-09-28 03:40:42.513 - stdout> 2018-09-28 03:40:42 INFO  TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool 
2018-09-28 03:40:42.524 - stdout> 2018-09-28 03:40:42 INFO  DAGScheduler:54 - ResultStage 0 (sql at NativeMethodAccessorImpl.java:0) finished in 0.560 s
2018-09-28 03:40:42.533 - stdout> 2018-09-28 03:40:42 INFO  DAGScheduler:54 - Job 0 finished: sql at NativeMethodAccessorImpl.java:0, took 0.644276 s
2018-09-28 03:40:42.555 - stdout> 2018-09-28 03:40:42 INFO  FileFormatWriter:54 - Job null committed.
2018-09-28 03:40:42.563 - stdout> 2018-09-28 03:40:42 INFO  FileFormatWriter:54 - Finished processing stats for job null.
2018-09-28 03:40:42.628 - stdout> 2018-09-28 03:40:42 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:42.628 - stdout> 2018-09-28 03:40:42 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:42.633 - stdout> 2018-09-28 03:40:42 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:42.633 - stdout> 2018-09-28 03:40:42 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:42.636 - stdout> 2018-09-28 03:40:42 INFO  HiveMetaStore:746 - 0: get_table : db=default tbl=data_source_tbl_2
2018-09-28 03:40:42.637 - stdout> 2018-09-28 03:40:42 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=data_source_tbl_2	
2018-09-28 03:40:42.684 - stdout> 2018-09-28 03:40:42 WARN  HiveExternalCatalog:66 - Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`data_source_tbl_2` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
2018-09-28 03:40:42.876 - stderr> java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@6678385
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
2018-09-28 03:40:42.876 - stderr> 	at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
2018-09-28 03:40:42.877 - stderr> 	at org.joda.time.base.BaseDateTime.<init>(BaseDateTime.java:198)
2018-09-28 03:40:42.877 - stderr> 	at org.joda.time.DateTime.<init>(DateTime.java:476)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hive.common.util.TimestampParser.<clinit>(TimestampParser.java:49)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyTimestampObjectInspector.<init>(LazyTimestampObjectInspector.java:38)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyPrimitiveObjectInspectorFactory.<clinit>(LazyPrimitiveObjectInspectorFactory.java:72)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:324)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyObjectInspector(LazyFactory.java:336)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazyFactory.createLazyStructInspector(LazyFactory.java:431)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:128)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:53)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:197)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:698)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply$mcV$sp(HiveClientImpl.scala:468)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:466)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createTable$1.apply(HiveClientImpl.scala:466)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:272)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:210)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:209)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:255)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:466)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:479)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$createDataSourceTable(HiveExternalCatalog.scala:379)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply$mcV$sp(HiveExternalCatalog.scala:243)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply(HiveExternalCatalog.scala:216)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$doCreateTable$1.apply(HiveExternalCatalog.scala:216)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.hive.HiveExternalCatalog.doCreateTable(HiveExternalCatalog.scala:216)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.createTable(ExternalCatalog.scala:119)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:304)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:184)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
2018-09-28 03:40:42.877 - stderr> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641)
2018-09-28 03:40:42.877 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2018-09-28 03:40:42.877 - stderr> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2018-09-28 03:40:42.877 - stderr> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2018-09-28 03:40:42.877 - stderr> 	at java.lang.reflect.Method.invoke(Method.java:498)
2018-09-28 03:40:42.877 - stderr> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
2018-09-28 03:40:42.877 - stderr> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2018-09-28 03:40:42.877 - stderr> 	at py4j.Gateway.invoke(Gateway.java:282)
2018-09-28 03:40:42.877 - stderr> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2018-09-28 03:40:42.878 - stderr> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2018-09-28 03:40:42.878 - stderr> 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
2018-09-28 03:40:42.878 - stderr> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:42.932 - stdout> 2018-09-28 03:40:42 INFO  HiveMetaStore:746 - 0: create_table: Table(tableName:data_source_tbl_2, dbName:default, owner:jenkins, createTime:1538131235, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json, spark.sql.create.version=2.3.1}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
2018-09-28 03:40:42.932 - stdout> 2018-09-28 03:40:42 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=create_table: Table(tableName:data_source_tbl_2, dbName:default, owner:jenkins, createTime:1538131235, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col, type:array<string>, comment:from deserializer)], location:null, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{path=file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-2.6-ubuntu-test/target/tmp/warehouse-3dd105c3-9487-4680-9820-51c5a1b544d4/data_source_tbl_2, serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"i","type":"integer","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=json, spark.sql.create.version=2.3.1}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))	
2018-09-28 03:40:42.942 - stdout> 2018-09-28 03:40:42 INFO  log:217 - Updating table stats fast for data_source_tbl_2
2018-09-28 03:40:42.942 - stdout> 2018-09-28 03:40:42 INFO  log:219 - Updated size of table data_source_tbl_2 to 8
2018-09-28 03:40:43.206 - stdout> 2018-09-28 03:40:43 INFO  HiveMetaStore:746 - 0: get_table : db=default tbl=hive_compatible_data_source_tbl_2
2018-09-28 03:40:43.207 - stdout> 2018-09-28 03:40:43 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_table : db=default tbl=hive_compatible_data_source_tbl_2	
2018-09-28 03:40:43.209 - stdout> 2018-09-28 03:40:43 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:43.209 - stdout> 2018-09-28 03:40:43 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:43.211 - stdout> 2018-09-28 03:40:43 INFO  HiveMetaStore:746 - 0: get_database: default
2018-09-28 03:40:43.211 - stdout> 2018-09-28 03:40:43 INFO  audit:371 - ugi=jenkins	ip=unknown-ip-addr	cmd=get_database: default	
2018-09-28 03:40:43.231 - stdout> 2018-09-28 03:40:43 INFO  ParquetFileFormat:54 - Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
2018-09-28 03:40:43.239 - stdout> 2018-09-28 03:40:43 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:43.24 - stdout> 2018-09-28 03:40:43 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-09-28 03:40:43.28 - stdout> 2018-09-28 03:40:43 INFO  SparkContext:54 - Starting job: sql at NativeMethodAccessorImpl.java:0
2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Got job 1 (sql at NativeMethodAccessorImpl.java:0) with 1 output partitions
2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Final stage: ResultStage 1 (sql at NativeMethodAccessorImpl.java:0)
2018-09-28 03:40:43.282 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Parents of final stage: List()
2018-09-28 03:40:43.283 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Missing parents: List()
2018-09-28 03:40:43.283 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Submitting ResultStage 1 (MapPartitionsRDD[4] at sql at NativeMethodAccessorImpl.java:0), which has no missing parents
2018-09-28 03:40:43.316 - stdout> 2018-09-28 03:40:43 INFO  MemoryStore:54 - Block broadcast_1 stored as values in memory (estimated size 147.4 KB, free 366.0 MB)
2018-09-28 03:40:43.32 - stdout> 2018-09-28 03:40:43 INFO  MemoryStore:54 - Block broadcast_1_piece0 stored as bytes in memory (estimated size 52.4 KB, free 365.9 MB)
2018-09-28 03:40:43.322 - stdout> 2018-09-28 03:40:43 INFO  BlockManagerInfo:54 - Added broadcast_1_piece0 in memory on 192.168.10.31:35027 (size: 52.4 KB, free: 366.2 MB)
2018-09-28 03:40:43.323 - stdout> 2018-09-28 03:40:43 INFO  SparkContext:54 - Created broadcast 1 from broadcast at DAGScheduler.scala:1039
2018-09-28 03:40:43.324 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at sql at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
2018-09-28 03:40:43.324 - stdout> 2018-09-28 03:40:43 INFO  TaskSchedulerImpl:54 - Adding task set 1.0 with 1 tasks
2018-09-28 03:40:43.327 - stdout> 2018-09-28 03:40:43 INFO  TaskSetManager:54 - Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 8067 bytes)
2018-09-28 03:40:43.327 - stdout> 2018-09-28 03:40:43 INFO  Executor:54 - Running task 0.0 in stage 1.0 (TID 1)
2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-09-28 03:40:43.358 - stdout> 2018-09-28 03:40:43 INFO  FileOutputCommitter:108 - File Output Committer Algorithm version is 1
2018-09-28 03:40:43.359 - stdout> 2018-09-28 03:40:43 INFO  SQLHadoopMapReduceCommitProtocol:54 - Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
2018-09-28 03:40:43.362 - stdout> 2018-09-28 03:40:43 INFO  CodecConfig:95 - Compression: SNAPPY
2018-09-28 03:40:43.364 - stdout> 2018-09-28 03:40:43 INFO  CodecConfig:95 - Compression: SNAPPY
2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:329 - Parquet block size to 134217728
2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:330 - Parquet page size to 1048576
2018-09-28 03:40:43.383 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:331 - Parquet dictionary page size to 1048576
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:332 - Dictionary is on
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:333 - Validation is off
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:334 - Writer version is: PARQUET_1_0
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:335 - Maximum row group padding size is 0 bytes
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:336 - Page size checking is: estimated
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:337 - Min row count for page size check is: 100
2018-09-28 03:40:43.384 - stdout> 2018-09-28 03:40:43 INFO  ParquetOutputFormat:338 - Max row count for page size check is: 10000
2018-09-28 03:40:43.415 - stdout> 2018-09-28 03:40:43 INFO  ParquetWriteSupport:54 - Initialized Parquet WriteSupport with Catalyst schema:
2018-09-28 03:40:43.415 - stdout> {
2018-09-28 03:40:43.415 - stdout>   "type" : "struct",
2018-09-28 03:40:43.415 - stdout>   "fields" : [ {
2018-09-28 03:40:43.415 - stdout>     "name" : "i",
2018-09-28 03:40:43.415 - stdout>     "type" : "integer",
2018-09-28 03:40:43.415 - stdout>     "nullable" : false,
2018-09-28 03:40:43.415 - stdout>     "metadata" : { }
2018-09-28 03:40:43.415 - stdout>   } ]
2018-09-28 03:40:43.415 - stdout> }
2018-09-28 03:40:43.415 - stdout> and corresponding Parquet message type:
2018-09-28 03:40:43.415 - stdout> message spark_schema {
2018-09-28 03:40:43.415 - stdout>   required int32 i;
2018-09-28 03:40:43.415 - stdout> }
2018-09-28 03:40:43.415 - stdout> 
2018-09-28 03:40:43.415 - stdout>        
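
The nullable : false / required int32 i pairing above is what Spark emits for a column derived from a non-null literal. A quick, purely illustrative PySpark check of that schema (an assumption about how the test data is produced, mirroring the names in the log rather than the actual script) would be:

    # Minimal sketch: a literal-backed column yields the non-nullable integer
    # schema shown in the Parquet WriteSupport message above.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.sql("SELECT 1 AS i")
    df.printSchema()
    # root
    #  |-- i: integer (nullable = false)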
2018-09-28 03:40:43.445 - stdout> 2018-09-28 03:40:43 INFO  CodecPool:153 - Got brand-new compressor [.snappy]
2018-09-28 03:40:43.492 - stdout> 2018-09-28 03:40:43 INFO  InternalParquetRecordWriter:160 - Flushing mem columnStore to file. allocated memory: 8
2018-09-28 03:40:43.564 - stderr> java.io.FileNotFoundException: /tmp/test-spark/spark-2.3.1/jars/snappy-java-1.1.2.6.jar (No such file or directory)
2018-09-28 03:40:43.564 - stderr> java.lang.NullPointerException
2018-09-28 03:40:43.564 - stderr> 	at org.xerial.snappy.SnappyLoader.extractLibraryFile(SnappyLoader.java:232)
2018-09-28 03:40:43.564 - stderr> 	at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:344)
2018-09-28 03:40:43.564 - stderr> 	at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
2018-09-28 03:40:43.565 - stderr> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
2018-09-28 03:40:43.565 - stderr> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.565 - stderr> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.565 - stderr> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.565 - stderr> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.565 - stderr> 	at java.lang.Thread.run(Thread.java:748)
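
The stderr block above is the root cause visible in this log: the Spark 2.3.1 installation under /tmp/test-spark is missing jars/snappy-java-1.1.2.6.jar, so SnappyLoader cannot extract its bundled native library and the Parquet writer's SNAPPY codec fails to initialize, which is what surfaces below as SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY]. As a purely hypothetical mitigation sketch (not part of the test, and not a fix for the incomplete Spark install), the same Parquet CTAS could be made to avoid the Snappy codec entirely:

    # Hypothetical workaround sketch only: sidestep the Snappy native codec for
    # Parquet writes instead of repairing the missing jar.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # spark.sql.parquet.compression.codec accepts e.g. "uncompressed", "gzip", "snappy".
    spark.conf.set("spark.sql.parquet.compression.codec", "gzip")

    spark.sql(
        "CREATE TABLE hive_compatible_data_source_tbl_2 USING parquet AS SELECT 1 AS i"
    )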
2018-09-28 03:40:43.57 - stdout> 2018-09-28 03:40:43 ERROR Utils:91 - Aborting task
2018-09-28 03:40:43.57 - stdout> org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.57 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.57 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.57 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.57 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.57 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.57 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.574 - stdout> 2018-09-28 03:40:43 ERROR FileFormatWriter:70 - Job job_20180928034043_0001 aborted.
2018-09-28 03:40:43.579 - stdout> 2018-09-28 03:40:43 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
2018-09-28 03:40:43.579 - stdout> org.apache.spark.SparkException: Task failed while writing rows.
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.579 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.579 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.579 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.579 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.579 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.579 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.579 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.579 - stdout> 	... 8 more
2018-09-28 03:40:43.608 - stdout> 2018-09-28 03:40:43 WARN  TaskSetManager:66 - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.608 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.608 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.608 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.608 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.608 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.608 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.608 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.609 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.609 - stdout> 	... 8 more
2018-09-28 03:40:43.609 - stdout> 
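The root cause in the trace above is org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null, thrown while ParquetOutputWriter flushes its first row group: Parquet's snappy codec could not load the libsnappyjava native library on this worker. One way to sidestep the native library entirely is to write the test tables with a pure-Java codec. A minimal PySpark sketch, assuming Spark 2.3.x; the app name and table name are hypothetical:

```python
# Sketch only: performs the same kind of Parquet table write without the snappy codec.
# Assumes PySpark 2.3.x is on the path; app name and table name are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("prepare testing tables without snappy")
         .config("spark.sql.parquet.compression.codec", "gzip")  # avoid the native snappy library
         .getOrCreate())

# The failing job was a CREATE TABLE ... AS SELECT; a plain saveAsTable exercises the
# same FileFormatWriter / ParquetOutputWriter path shown in the stack trace above.
spark.range(10).write.mode("overwrite").format("parquet").saveAsTable("test_tbl")

spark.stop()
```

Both "gzip" and "uncompressed" are accepted values for spark.sql.parquet.compression.codec in Spark 2.3 and do not require any native code.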
2018-09-28 03:40:43.61 - stdout> 2018-09-28 03:40:43 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job
2018-09-28 03:40:43.611 - stdout> 2018-09-28 03:40:43 INFO  TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool 
2018-09-28 03:40:43.616 - stdout> 2018-09-28 03:40:43 INFO  TaskSchedulerImpl:54 - Cancelling stage 1
2018-09-28 03:40:43.618 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - ResultStage 1 (sql at NativeMethodAccessorImpl.java:0) failed in 0.331 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.618 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.618 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.618 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.618 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.618 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.618 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.618 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.618 - stdout> 	... 8 more
2018-09-28 03:40:43.618 - stdout> 
2018-09-28 03:40:43.618 - stdout> Driver stacktrace:
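If changing the codec is not an option, FAILED_TO_LOAD_NATIVE_LIBRARY often means snappy-java could not extract and load its bundled shared library under java.io.tmpdir (for example when /tmp is mounted noexec). The nested cause is null here, so that is only a guess; a hedged sketch that redirects snappy-java's extraction directory through the same --driver-java-options flag used by the failing command line (both paths and the script name below are hypothetical):

```python
# Sketch only: re-run the submission with snappy-java extracting its native library
# into a writable, executable directory. Assumes the working directory is a Spark
# 2.3.x distribution; paths and script name are hypothetical stand-ins.
import subprocess

snappy_tmpdir = "/home/jenkins/snappy-tmp"  # must be writable and not mounted noexec

subprocess.run(
    [
        "./bin/spark-submit",
        "--master", "local[2]",
        "--driver-java-options", f"-Dorg.xerial.snappy.tempdir={snappy_tmpdir}",
        "prepare_testing_tables.py",  # hypothetical stand-in for the generated test script
    ],
    check=True,
)
```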
2018-09-28 03:40:43.619 - stdout> 2018-09-28 03:40:43 INFO  DAGScheduler:54 - Job 1 failed: sql at NativeMethodAccessorImpl.java:0, took 0.338829 s
2018-09-28 03:40:43.622 - stdout> 2018-09-28 03:40:43 ERROR FileFormatWriter:91 - Aborting job null.
2018-09-28 03:40:43.622 - stdout> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
2018-09-28 03:40:43.622 - stdout> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-09-28 03:40:43.622 - stdout> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-09-28 03:40:43.622 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2018-09-28 03:40:43.622 - stdout> Caused by: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null
2018-09-28 03:40:43.622 - stdout> 	at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:159)
2018-09-28 03:40:43.622 - stdout> 	at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:93)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:150)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:238)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:121)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:167)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:109)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:163)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1414)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
2018-09-28 03:40:43.622 - stdout> 	... 8 more
2018-09-28 03:40:43.622 - stdout> 
2018-09-28 03:40:43.622 - stdout> Driver stacktrace:
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
2018-09-28 03:40:43.622 - stdout> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
2018-09-28 03:40:43.622 - stdout> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> 	at scala.Option.foreach(Option.scala:257)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:528)
2018-09-28 03:40:43.622 - stdout> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.saveDataIntoTable(createDataSourceTables.scala:216)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:176)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3254)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3253)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
2018-09-28 03:40:43.623 - stdout> 	at org.apache.spark.sq