sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException: spark-submit returned with exit code 1.
Command line: './bin/spark-submit' '--name' 'prepare testing tables' '--master' 'local[2]' '--conf' 'spark.ui.enabled=false' '--conf' 'spark.master.rest.enabled=false' '--conf' 'spark.sql.hive.metastore.version=1.2.1' '--conf' 'spark.sql.hive.metastore.jars=maven' '--conf' 'spark.sql.warehouse.dir=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/warehouse-8b9db627-80f2-4479-b915-1866fd0929c3' '--conf' 'spark.sql.test.version.index=1' '--driver-java-options' '-Dderby.system.home=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/warehouse-8b9db627-80f2-4479-b915-1866fd0929c3' '/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/test1514055354230278379.py'

2019-07-15 06:59:56.549 - stderr> 19/07/15 06:59:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-07-15 06:59:57.445 - stderr> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
2019-07-15 06:59:57.448 - stderr> 19/07/15 06:59:57 INFO SparkContext: Running Spark version 2.4.3
2019-07-15 06:59:57.475 - stderr> 19/07/15 06:59:57 INFO SparkContext: Submitted application: prepare testing tables
2019-07-15 06:59:57.544 - stderr> 19/07/15 06:59:57 INFO SecurityManager: Changing view acls to: jenkins
2019-07-15 06:59:57.544 - stderr> 19/07/15 06:59:57 INFO SecurityManager: Changing modify acls to: jenkins
2019-07-15 06:59:57.544 - stderr> 19/07/15 06:59:57 INFO SecurityManager: Changing view acls groups to: 
2019-07-15 06:59:57.545 - stderr> 19/07/15 06:59:57 INFO SecurityManager: Changing modify acls groups to: 
2019-07-15 06:59:57.545 - stderr> 19/07/15 06:59:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(jenkins); groups with view permissions: Set(); users  with modify permissions: Set(jenkins); groups with modify permissions: Set()
2019-07-15 06:59:57.929 - stderr> 19/07/15 06:59:57 INFO Utils: Successfully started service 'sparkDriver' on port 46876.
2019-07-15 06:59:57.968 - stderr> 19/07/15 06:59:57 INFO SparkEnv: Registering MapOutputTracker
2019-07-15 06:59:57.995 - stderr> 19/07/15 06:59:57 INFO SparkEnv: Registering BlockManagerMaster
2019-07-15 06:59:58.0 - stderr> 19/07/15 06:59:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2019-07-15 06:59:58.0 - stderr> 19/07/15 06:59:58 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
2019-07-15 06:59:58.013 - stderr> 19/07/15 06:59:58 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-05890dd6-4218-42d9-86e3-e3960a8be76f
2019-07-15 06:59:58.034 - stderr> 19/07/15 06:59:58 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
2019-07-15 06:59:58.05 - stderr> 19/07/15 06:59:58 INFO SparkEnv: Registering OutputCommitCoordinator
2019-07-15 06:59:58.136 - stderr> 19/07/15 06:59:58 INFO Executor: Starting executor ID driver on host localhost
2019-07-15 06:59:58.204 - stderr> 19/07/15 06:59:58 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42554.
2019-07-15 06:59:58.204 - stderr> 19/07/15 06:59:58 INFO NettyBlockTransferService: Server created on amp-jenkins-worker-05.amp:42554
2019-07-15 06:59:58.206 - stderr> 19/07/15 06:59:58 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2019-07-15 06:59:58.228 - stderr> 19/07/15 06:59:58 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, amp-jenkins-worker-05.amp, 42554, None)
2019-07-15 06:59:58.231 - stderr> 19/07/15 06:59:58 INFO BlockManagerMasterEndpoint: Registering block manager amp-jenkins-worker-05.amp:42554 with 366.3 MB RAM, BlockManagerId(driver, amp-jenkins-worker-05.amp, 42554, None)
2019-07-15 06:59:58.234 - stderr> 19/07/15 06:59:58 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, amp-jenkins-worker-05.amp, 42554, None)
2019-07-15 06:59:58.234 - stderr> 19/07/15 06:59:58 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, amp-jenkins-worker-05.amp, 42554, None)
2019-07-15 06:59:58.612 - stderr> 19/07/15 06:59:58 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/warehouse-8b9db627-80f2-4479-b915-1866fd0929c3').
2019-07-15 06:59:58.613 - stderr> 19/07/15 06:59:58 INFO SharedState: Warehouse path is '/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/warehouse-8b9db627-80f2-4479-b915-1866fd0929c3'.
2019-07-15 06:59:59.134 - stderr> 19/07/15 06:59:59 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
2019-07-15 07:00:00.965 - stdout> Traceback (most recent call last):
2019-07-15 07:00:00.965 - stdout>   File "/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/sql/hive/target/tmp/org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite/test1514055354230278379.py", line 8, in <module>
2019-07-15 07:00:00.965 - stdout>     spark.sql("create table data_source_tbl_{} using json as select 1 i".format(version_index))
2019-07-15 07:00:00.965 - stdout>   File "/tmp/test-spark/spark-2.4.3/python/lib/pyspark.zip/pyspark/sql/session.py", line 767, in sql
2019-07-15 07:00:00.965 - stdout>   File "/tmp/test-spark/spark-2.4.3/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
2019-07-15 07:00:00.965 - stdout>   File "/tmp/test-spark/spark-2.4.3/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
2019-07-15 07:00:00.965 - stdout>   File "/tmp/test-spark/spark-2.4.3/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
2019-07-15 07:00:00.968 - stdout> py4j.protocol.Py4JJavaError: An error occurred while calling o29.sql.
2019-07-15 07:00:00.968 - stdout> : java.lang.ExceptionInInitializerError
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.SparkContext.parallelize(SparkContext.scala:716)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.execution.SparkStrategies.singleRowRdd$lzycompute(SparkStrategies.scala:470)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.execution.SparkStrategies.singleRowRdd(SparkStrategies.scala:470)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:627)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
2019-07-15 07:00:00.968 - stdout> 	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
2019-07-15 07:00:00.968 - stdout> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
2019-07-15 07:00:00.968 - stdout> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
2019-07-15 07:00:00.968 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3359)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
2019-07-15 07:00:00.969 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2019-07-15 07:00:00.969 - stdout> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2019-07-15 07:00:00.969 - stdout> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2019-07-15 07:00:00.969 - stdout> 	at java.lang.reflect.Method.invoke(Method.java:498)
2019-07-15 07:00:00.969 - stdout> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
2019-07-15 07:00:00.969 - stdout> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
2019-07-15 07:00:00.969 - stdout> 	at py4j.Gateway.invoke(Gateway.java:282)
2019-07-15 07:00:00.969 - stdout> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
2019-07-15 07:00:00.969 - stdout> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
2019-07-15 07:00:00.969 - stdout> 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
2019-07-15 07:00:00.969 - stdout> 	at java.lang.Thread.run(Thread.java:748)
2019-07-15 07:00:00.969 - stdout> Caused by: java.util.NoSuchElementException: key not found: groupId
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.MapLike$class.default(MapLike.scala:228)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractMap.default(Map.scala:59)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.MapLike$class.apply(MapLike.scala:141)
2019-07-15 07:00:00.969 - stdout> 	at scala.collection.AbstractMap.apply(Map.scala:59)
2019-07-15 07:00:00.969 - stdout> 	at com.fasterxml.jackson.module.scala.JacksonModule$.version$lzycompute(JacksonModule.scala:27)
2019-07-15 07:00:00.969 - stdout> 	at com.fasterxml.jackson.module.scala.JacksonModule$.version(JacksonModule.scala:26)
2019-07-15 07:00:00.969 - stdout> 	at com.fasterxml.jackson.module.scala.JacksonModule$class.version(JacksonModule.scala:49)
2019-07-15 07:00:00.969 - stdout> 	at com.fasterxml.jackson.module.scala.DefaultScalaModule.version(DefaultScalaModule.scala:19)
2019-07-15 07:00:00.969 - stdout> 	at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:710)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
2019-07-15 07:00:00.969 - stdout> 	at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
2019-07-15 07:00:00.969 - stdout> 	... 69 more
2019-07-15 07:00:00.969 - stdout> 
2019-07-15 07:00:01.061 - stderr> 19/07/15 07:00:01 INFO SparkContext: Invoking stop() from shutdown hook
2019-07-15 07:00:01.085 - stderr> 19/07/15 07:00:01 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2019-07-15 07:00:01.098 - stderr> 19/07/15 07:00:01 INFO MemoryStore: MemoryStore cleared
2019-07-15 07:00:01.099 - stderr> 19/07/15 07:00:01 INFO BlockManager: BlockManager stopped
2019-07-15 07:00:01.108 - stderr> 19/07/15 07:00:01 INFO BlockManagerMaster: BlockManagerMaster stopped
2019-07-15 07:00:01.114 - stderr> 19/07/15 07:00:01 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
2019-07-15 07:00:01.121 - stderr> 19/07/15 07:00:01 INFO SparkContext: Successfully stopped SparkContext
2019-07-15 07:00:01.122 - stderr> 19/07/15 07:00:01 INFO ShutdownHookManager: Shutdown hook called
2019-07-15 07:00:01.125 - stderr> 19/07/15 07:00:01 INFO ShutdownHookManager: Deleting directory /tmp/spark-f56344ee-1451-4f48-8a62-e345eec2578c/pyspark-9a5285c5-44c5-4eb8-a125-1aa816dfce6f
2019-07-15 07:00:01.129 - stderr> 19/07/15 07:00:01 INFO ShutdownHookManager: Deleting directory /tmp/spark-f56344ee-1451-4f48-8a62-e345eec2578c
2019-07-15 07:00:01.133 - stderr> 19/07/15 07:00:01 INFO ShutdownHookManager: Deleting directory /tmp/spark-b3f74184-d79d-4ec7-a6e8-740e47e6ad41
           
	at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
	at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
	at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
	at org.scalatest.Assertions.fail(Assertions.scala:1089)
	at org.scalatest.Assertions.fail$(Assertions.scala:1085)
	at org.scalatest.FunSuite.fail(FunSuite.scala:1560)
	at org.apache.spark.sql.hive.SparkSubmitTestUtils.runSparkSubmit(SparkSubmitTestUtils.scala:94)
	at org.apache.spark.sql.hive.SparkSubmitTestUtils.runSparkSubmit$(SparkSubmitTestUtils.scala:41)
	at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.runSparkSubmit(HiveExternalCatalogVersionsSuite.scala:47)
	at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.$anonfun$beforeAll$1(HiveExternalCatalogVersionsSuite.scala:198)
	at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.$anonfun$beforeAll$1$adapted(HiveExternalCatalogVersionsSuite.scala:181)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.beforeAll(HiveExternalCatalogVersionsSuite.scala:181)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:56)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
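
For reference, a hedged sketch of what the generated test script appears to do around the failing call, reconstructed only from the Python traceback above (line 8 of test1514055354230278379.py) and the '--conf spark.sql.test.version.index=1' option on the spark-submit command line; the actual script is generated by HiveExternalCatalogVersionsSuite, so names and surrounding steps may differ:

    from pyspark.sql import SparkSession

    # Hypothetical reconstruction of the generated test script (assumption).
    spark = SparkSession.builder.enableHiveSupport().getOrCreate()
    version_index = spark.conf.get("spark.sql.test.version.index")

    # The call that fails above with ExceptionInInitializerError, caused by
    # "java.util.NoSuchElementException: key not found: groupId" while
    # jackson-module-scala's DefaultScalaModule is being registered.
    spark.sql("create table data_source_tbl_{} using json as select 1 i".format(version_index))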