Details for spark-master-test-maven-hadoop-3.2 build #352

Duration: 274 minutes
Start time: 2019-09-17 12:41:05
Commit: 34915b22ab174a45c563ccdcd5035299f3ccc56c
Executor: amp-jenkins-worker-03
Status: ABORTED

Failed tests

org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table 54 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table with a user specified schema 440 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table with a user specified schema with a subset of fields 449 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: resolve shortened provider names 77 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: drop table 45 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: check change without refresh 159 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: drop, change, recreate 104 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: invalidate cache and reload 535 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS 97 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS with IF NOT EXISTS 77 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS a managed table 71 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable(CTAS) using append and insertInto when the target table is Hive serde 29 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-5839 HiveMetastoreCatalog does not recognize table aliases of data source tables. 54 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: save table 105 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: create external table 106 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: scan a parquet table created through a CTAS statement 78 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Pre insert nullability check (ArrayType) 79 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Pre insert nullability check (MapType) 80 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Saving partitionBy columns information 187 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Saving information for sortBy and bucketBy columns 48 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: insert into a table 41 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append table using different formats 50 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append a table using the same formats but different names 50 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append a table with file source V2 provider using the v1 file format 49 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append a table with v1 file format provider using file source V2 format 43 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-8156:create table to specific database by 'use dbname' 303 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: skip hive metadata on table creation 76 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted partitioned data source table 99 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted bucketed data source table 92 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted partitioned bucketed data source table 91 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: the column order doesn't matter 57 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: mismatch column names 40 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: too many columns 42 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable - source and target are the same table 47 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: insertInto - source and target are the same table 46 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: less columns 44 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-15025: create datasource table with path with select 82 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-15269 external data source table creation 60 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: should keep data source entries in table properties when debug mode is on 2 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Infer schema for Hive serde tables 491 ms
org.apache.spark.sql.hive.UDFSuite: UDF case insensitive 715 ms
org.apache.spark.sql.hive.UDFSuite: temporary function: create and drop 65 ms
org.apache.spark.sql.hive.UDFSuite: permanent function: create and drop without specifying db name 95 ms
org.apache.spark.sql.hive.UDFSuite: permanent function: create and drop with a db name 84 ms
org.apache.spark.sql.hive.UDFSuite: permanent function: create and drop a function in another db 416 ms
org.apache.spark.sql.hive.execution.HiveTableScanSuite: Verify SQLConf HIVE_METASTORE_PARTITION_PRUNING 1186 ms
org.apache.spark.sql.hive.execution.HiveTableScanSuite: SPARK-16926: number of table and partition columns match for new partitioned table 1 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_multipartitioning.q (deterministic) 1 159 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_multipartitioning.q (deterministic) 3 212 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_multipartitioning.q (deterministic) 4 165 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_multipartitioning.q (deterministic) 5 169 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_multipartitioning.q (deterministic) 6 249 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_navfn.q (deterministic) 191 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_ntile.q (deterministic) 140 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_udaf.q (deterministic) 272 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_windowspec.q (deterministic) 600 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_rank.q (deterministic) 1 176 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_rank.q (deterministic) 2 304 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_rank.q (deterministic) 3 215 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing_rank.q (deterministic) 4 213 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 1. testWindowing 101 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 2. testGroupByWithPartitioning 267 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 3. testGroupByHavingWithSWQ 235 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 4. testCount 94 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 5. testCountWithWindowingUDAF 127 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 6. testCountInSubQ 143 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 8. testMixedCaseAlias 86 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 9. testHavingWithWindowingNoGBY 116 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 10. testHavingWithWindowingCondRankNoGBY 96 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 11. testFirstLast 100 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 12. testFirstLastWithWhere 117 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 13. testSumWindow 98 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 14. testNoSortClause 76 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 16. testMultipleWindows 162 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 17. testCountStar 120 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 18. testUDAFs 133 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 19. testUDAFsWithGBY 239 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 21. testDISTs 193 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 24. testLateralViews 151 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 26. testGroupByHavingWithSWQAndAlias 128 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 27. testMultipleRangeWindows 76 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 28. testPartOrderInUDAFInvoke 73 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 29. testPartOrderInWdwDef 70 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 30. testDefaultPartitioningSpecRules 68 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 36. testRankWithPartitioning 74 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 37. testPartitioningVariousForms 136 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 38. testPartitioningVariousForms2 120 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 39. testUDFOnOrderCols 211 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 40. testNoBetweenForRows 144 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 41. testNoBetweenForRange 101 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 42. testUnboundedFollowingForRows 100 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 43. testUnboundedFollowingForRange 92 ms
org.apache.spark.sql.hive.execution.HiveWindowFunctionQuerySuite: windowing.q -- 44. testOverNoPartitionSingleAggregate 116 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: test all data types 80 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - non-partitioned table - Overwrite 115 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - non-partitioned table - Append 78 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - non-partitioned table - ErrorIfExists 66 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - non-partitioned table - Ignore 76 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - simple queries 133 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - Overwrite 83 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - Append 92 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values 107 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists 68 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - Ignore 119 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite 22 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append 28 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore 475 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries 22 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type 2 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite 24 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append 31 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values 17 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns 16 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists 45 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore 41 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs 111 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data 98 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference 79 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-9735 Partition column type casting 111 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table 41 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: Locality support for FileScanRDD 65 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly 49 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data 46 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-12218: 'Not' is included in ORC filter pushdown 106 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for ORC via option() 633 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: Default compression codec is snappy for ORC compression 55 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: test all data types 60 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - non-partitioned table - Overwrite 274 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - non-partitioned table - Append 73 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - non-partitioned table - ErrorIfExists 90 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - non-partitioned table - Ignore 78 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - simple queries 171 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - Overwrite 100 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - Append 78 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values 71 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists 99 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - Ignore 77 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite 17 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append 19 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore 496 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries 29 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type 2 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite 34 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append 41 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values 23 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns 19 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists 64 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore 55 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs 89 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data 96 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference 101 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: SPARK-9735 Partition column type casting 163 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table 62 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: Locality support for FileScanRDD 67 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly 67 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data 77 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: SPARK-9894: save complex types to JSON 66 ms
org.apache.spark.sql.sources.JsonHadoopFsRelationSuite: SPARK-10196: save decimal type to JSON 68 ms

Test time report

(Interactive test-time visualization not reproduced in this text export; per-suite timing is available on the Jenkins build page.)