Details for spark-master-test-maven-hadoop-3.2 build #298

View on Jenkins

Duration: 335 minutes
Start time: 2019-09-03 18:10:07
Commit: 2856398de9d35d136758bbc11afa4d1dc0c98830
Executor: amp-jenkins-worker-05
Status: ABORTED

Failed tests

org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table 33 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as external table 73 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table using CTAS 63 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table 33 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as external table 75 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table using CTAS 66 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table 31 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as external table 74 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table using CTAS 72 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table 30 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as external table 67 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table using CTAS 75 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table 31 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as external table 73 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table using CTAS 72 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the bucketed data source table SerDe correctly 30 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the partitioned bucketed data source table SerDe correctly 562 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Append 758 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - ErrorIfExists 65 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Ignore 61 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries 291 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Overwrite 63 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append 62 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values 64 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists 74 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Ignore 79 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite 20 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append 18 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore 331 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries 24 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type 3 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite 25 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append 24 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values 19 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns 17 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists 41 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore 43 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs 65 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data 90 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference 75 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-9735 Partition column type casting 132 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table 62 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Locality support for FileScanRDD 38 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly 43 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data 49 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7868: _temporary directories should be ignored 207 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8014: Avoid scanning output directory when SaveMode isn't SaveMode.Append 84 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8079: Avoid NPE thrown from BaseWriterContainer.abortJob 55 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8604: Parquet data source should write summary file while doing appending 47 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-10334 Projections and filters should be kept in physical plan 41 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-11500: Not deterministic order of columns when using merging schemas. 100 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13537: Fix readBytes in VectorizedPlainValuesReader 51 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for Parquet via option() 92 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8406: Avoids name collision while writing files 29 ms
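For context, the two suites listed above exercise Spark SQL's data source write/read paths against Parquet/ORC and the Hive metastore. The following standalone Scala sketch is not part of this build; the object name, table name, and path are illustrative only. It shows the kind of save()/load() and saveAsTable() round trips these tests cover, assuming a local SparkSession with Hive support.

import org.apache.spark.sql.{SaveMode, SparkSession}

object ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    // Hypothetical local session; the test suites use their own shared test context.
    val spark = SparkSession.builder()
      .appName("parquet-round-trip")
      .master("local[*]")
      .enableHiveSupport()   // saveAsTable() below persists into the Hive metastore
      .getOrCreate()
    import spark.implicits._

    val path = "/tmp/parquet-demo"   // illustrative scratch path
    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

    // save()/load() - partitioned table - Append
    df.write
      .mode(SaveMode.Append)
      .partitionBy("id")
      .parquet(path)
    spark.read.parquet(path).show()

    // saveAsTable()/load() - non-partitioned table - Overwrite
    df.write.mode(SaveMode.Overwrite).saveAsTable("demo_table")
    spark.table("demo_table").show()

    spark.stop()
  }
}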

Test time report

(Interactive visualization showing the combined duration of tests under each node of the suite hierarchy; available on the Jenkins build page.)