Details for spark-master-test-maven-hadoop-2.7-ubuntu-testing build #1797


Duration: 335 minutes
Start time: 2019-09-21 05:31:15
Commit: a9ae262cf279bc607cb842204c717257c259d82b
Executor: amp-jenkins-staging-worker-02
Status: ABORTED

Failed tests

org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table 46 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as external table 83 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned parquet relation into metastore as managed table using CTAS 65 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table 29 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as external table 72 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat relation into metastore as managed table using CTAS 78 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table 31 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as external table 62 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned orc relation into metastore as managed table using CTAS 73 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table 28 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as external table 69 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.hive.orc relation into metastore as managed table using CTAS 69 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table 42 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as external table 63 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: Persist non-partitioned org.apache.spark.sql.execution.datasources.orc.OrcFileFormat relation into metastore as managed table using CTAS 61 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the bucketed data source table SerDe correctly 25 ms
org.apache.spark.sql.hive.DataSourceWithHiveMetastoreCatalogSuite: SPARK-27592 set the partitioned bucketed data source table SerDe correctly 481 ms
org.apache.spark.sql.hive.execution.PruningSuite: Partition pruning - pruning with both column key and partition key - query test 7505 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: test all data types 32 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Overwrite 120 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Append 61 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - ErrorIfExists 60 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - non-partitioned table - Ignore 68 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries 131 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Overwrite 99 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append 79 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Append - new partition values 58 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - ErrorIfExists 72 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - Ignore 55 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Overwrite 19 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Append 15 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - non-partitioned table - Ignore 235 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - simple queries 37 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - boolean type 2 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Overwrite 31 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append 29 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - new partition values 18 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Append - mismatched partition columns 18 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - ErrorIfExists 40 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: saveAsTable()/load() - partitioned table - Ignore 41 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: load() - with directory of unpartitioned data in nested subdirs 54 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - unpartitioned data 89 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Hadoop style globbing - partitioned data with schema inference 78 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-9735 Partition column type casting 124 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7616: adjust column name order accordingly when saving partitioned table 57 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: Locality support for FileScanRDD 39 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly 42 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data 45 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-7868: _temporary directories should be ignored 96 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8014: Avoid scanning output directory when SaveMode isn't SaveMode.Append 65 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8079: Avoid NPE thrown from BaseWriterContainer.abortJob 36 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8604: Parquet data source should write summary file while doing appending 43 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-10334 Projections and filters should be kept in physical plan 50 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-11500: Not deterministic order of columns when using merging schemas. 88 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13537: Fix readBytes in VectorizedPlainValuesReader 33 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for Parquet via option() 61 ms
org.apache.spark.sql.sources.ParquetHadoopFsRelationSuite: SPARK-8406: Avoids name collision while writing files 38 ms

Test time report
