Details for spark-master-test-maven-hadoop-3.2 build #366


Duration: 335 minutes
Start time: 2019-09-20 00:57:56
Commit: 76ebf2241a3f2149de13d6c89adcb86325b06004
Executor: amp-jenkins-worker-02
Status: ABORTED

Failed tests

org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table (28 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as external table (78 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table using CTAS (72 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table (26 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as external table (79 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table using CTAS (80 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table (27 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as external table (70 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table using CTAS (62 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table (28 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as external table (80 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table using CTAS (70 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table (27 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as external table (67 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table using CTAS (75 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the bucketed data source table SerDe correctly (27 ms)
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the partitioned bucketed data source table SerDe correctly (714 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: test all data types (23780 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Overwrite (308 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Append (67 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - ErrorIfExists (60 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Ignore (95 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries (145 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Overwrite (90 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append (82 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values (81 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists (81 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Ignore (90 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite (16 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append (17 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore (398 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries (24 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type (3 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite (22 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append (23 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values (17 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns (15 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists (33 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore (41 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs (73 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data (63 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference (80 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-9735 Partition column type casting (126 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table (61 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Locality support for FileScanRDD (61 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly (63 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data (69 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7868: _temporary directories should be ignored (113 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8014: Avoid scanning output directory when SaveMode isn't SaveMode.Append (76 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8079: Avoid NPE thrown from BaseWriterContainer.abortJob (51 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8604: Parquet data source should write summary file while doing appending (44 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-10334 Projections and filters should be kept in physical plan (38 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-11500: Not deterministic order of columns when using merging schemas. (89 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13537: Fix readBytes in VectorizedPlainValuesReader (44 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for Parquet via option() (68 ms)
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8406: Avoids name collision while writing files (37 ms)
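
For reference, the failures above fall into two areas: DataSourceWithHiveMetastoreCatalogSuite covers persisting parquet/ORC data source relations into the Hive metastore as managed or external tables (including CTAS), while ParquetHadoopFsRelationSuite covers the DataFrameWriter save()/saveAsTable() paths with the various SaveMode values and partitioning. The Scala sketch below is a minimal illustration of those patterns, not code from this build; it assumes a local SparkSession with Hive support, and the paths and table names are hypothetical.

import org.apache.spark.sql.{SaveMode, SparkSession}

object SaveLoadSketch {
  def main(args: Array[String]): Unit = {
    // Hive support is needed for saveAsTable() to register tables in a Hive metastore,
    // which is the behavior DataSourceWithHiveMetastoreCatalogSuite verifies.
    val spark = SparkSession.builder()
      .appName("save-load-sketch")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")
    val path = "/tmp/save-load-sketch"  // hypothetical output location

    // save()/load() with the SaveMode variants exercised by ParquetHadoopFsRelationSuite.
    // SaveMode.ErrorIfExists (the default) would throw once the path exists.
    df.write.mode(SaveMode.Overwrite).parquet(path)
    df.write.mode(SaveMode.Append).parquet(path)
    df.write.mode(SaveMode.Ignore).parquet(path)  // no-op: path already exists
    spark.read.parquet(path).show()

    // Partitioned variant of the same path-based save()/load()
    df.write.mode(SaveMode.Overwrite).partitionBy("id").parquet(path + "-partitioned")

    // saveAsTable() persists the relation into the metastore as a managed table;
    // supplying a "path" option makes it an external table instead.
    df.write.mode(SaveMode.Overwrite).format("parquet").saveAsTable("sketch_managed")
    df.write.mode(SaveMode.Overwrite).format("parquet")
      .option("path", path + "-external").saveAsTable("sketch_external")

    // CTAS form, as in the "managed table using CTAS" tests
    spark.sql("CREATE TABLE sketch_ctas USING parquet AS SELECT * FROM sketch_managed")

    spark.stop()
  }
}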

Test time report

Interactive drill-down visualization of combined test durations, grouped hierarchically by node.