Details for spark-master-test-maven-hadoop-3.2 build #383

Duration: 335 minutes
Start time: 2019-09-23 16:11:08
Commit: 0c40b94ae57b994cfc5c848baa0fc2629ba378c3
Executor: amp-jenkins-worker-03
Status: ABORTED

Failed tests

org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table 45 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as external table 117 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table using CTAS 83 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table 38 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as external table 117 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table using CTAS 90 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table 48 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as external table 88 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table using CTAS 87 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table 53 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as external table 103 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table using CTAS 93 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table 50 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as external table 100 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table using CTAS 83 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the bucketed data source table SerDe correctly 36 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the partitioned bucketed data source table SerDe correctly 775 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append 1342 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values 390 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists 148 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Ignore 117 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite 23 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append 22 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore 613 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries 42 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type 1 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite 41 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append 39 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values 22 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns 24 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists 58 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore 61 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs 101 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data 101 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference 80 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-9735 Partition column type casting 175 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table 77 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Locality support for FileScanRDD 67 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly 50 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data 82 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7868: _temporary directories should be ignored 172 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8014: Avoid scanning output directory when SaveMode isn't SaveMode.Append 105 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8079: Avoid NPE thrown from BaseWriterContainer.abortJob 51 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8604: Parquet data source should write summary file while doing appending 64 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-10334 Projections and filters should be kept in physical plan 59 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-11500: Not deterministic order of columns when using merging schemas. 127 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13537: Fix readBytes in VectorizedPlainValuesReader 80 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for Parquet via option() 98 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8406: Avoids name collision while writing files 70 ms
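
All of the failing suites above live in the sql/hive module, and because the build status is ABORTED they may simply have been cut off mid-run rather than being genuine regressions. As a starting point for reproducing them locally, the following is a minimal sketch, assuming a Spark source checkout at the commit above and Maven profiles inferred from the job name; the job's actual configuration is authoritative for the exact flags it passes:

  # check out the exact commit this build tested
  git checkout 0c40b94ae57b994cfc5c848baa0fc2629ba378c3

  # run one of the failing ScalaTest suites in isolation
  ./build/mvn -Phadoop-3.2 -Phive -Phive-thriftserver \
    -DwildcardSuites=org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite \
    -Dtest=none test

Narrowing the run to the sql/hive module (for example with Maven's -pl option) can speed this up, but requires the dependent modules to have been built or installed locally first.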

Test time report

(Interactive visualization of per-suite test times; available only on the Jenkins build page.)