Details for spark-master-test-maven-hadoop-2.7-ubuntu-testing build #1786

Duration: 61 minutes
Start time: 2019-09-19 07:26:58
Commit: 3bf43fb60d6f8aba23eaa1e43405024725b50f22
Executor: amp-jenkins-staging-worker-02
Status: FAILURE

Failed tests (61)

org.apache.spark.FileSuite: text files 68 ms
org.apache.spark.FileSuite: text files (compressed) 72 ms
org.apache.spark.FileSuite: text files do not allow null rows 70 ms
org.apache.spark.FileSuite: SequenceFiles 81 ms
org.apache.spark.FileSuite: SequenceFile (compressed) 72 ms
org.apache.spark.FileSuite: SequenceFile with writable key 73 ms
org.apache.spark.FileSuite: SequenceFile with writable value 86 ms
org.apache.spark.FileSuite: SequenceFile with writable key and value 73 ms
org.apache.spark.FileSuite: implicit conversions in reading SequenceFiles 80 ms
org.apache.spark.FileSuite: object files of ints 101 ms
org.apache.spark.FileSuite: object files of complex types 80 ms
org.apache.spark.FileSuite: object files of classes from a JAR 219 ms
org.apache.spark.FileSuite: write SequenceFile using new Hadoop API 59 ms
org.apache.spark.FileSuite: read SequenceFile using new Hadoop API 59 ms
org.apache.spark.FileSuite: binary file input as byte array 64 ms
org.apache.spark.FileSuite: portabledatastream caching tests 76 ms
org.apache.spark.FileSuite: portabledatastream persist disk storage 63 ms
org.apache.spark.FileSuite: portabledatastream flatmap tests 82 ms
org.apache.spark.FileSuite: SPARK-22357 test binaryFiles minPartitions 74 ms
org.apache.spark.FileSuite: minimum split size per node and per rack should be less than or equal to maxSplitSize 60 ms
org.apache.spark.FileSuite: fixed record length binary file as byte array 67 ms
org.apache.spark.FileSuite: negative binary record length should raise an exception 72 ms
org.apache.spark.FileSuite: file caching 89 ms
org.apache.spark.FileSuite: prevent user from overwriting the empty directory (old Hadoop API) 63 ms
org.apache.spark.FileSuite: prevent user from overwriting the non-empty directory (old Hadoop API) 57 ms
org.apache.spark.FileSuite: prevent user from overwriting the empty directory (new Hadoop API) 77 ms
org.apache.spark.FileSuite: prevent user from overwriting the non-empty directory (new Hadoop API) 65 ms
org.apache.spark.FileSuite: save Hadoop Dataset through old Hadoop API 93 ms
org.apache.spark.FileSuite: save Hadoop Dataset through new Hadoop API 62 ms
org.apache.spark.FileSuite: Get input files via old Hadoop API 62 ms
org.apache.spark.FileSuite: Get input files via new Hadoop API 59 ms
org.apache.spark.FileSuite: spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD 79 ms
org.apache.spark.FileSuite: spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API) 73 ms
org.apache.spark.FileSuite: spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API) 61 ms
org.apache.spark.FileSuite: spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD 122 ms
org.apache.spark.SparkContextSuite: Comma separated paths for newAPIHadoopFile/wholeTextFiles/binaryFiles (SPARK-7155) 123 ms
org.apache.spark.api.python.PythonRDDSuite: SparkContext's hadoop configuration should be respected in PythonRDD 94 ms
org.apache.spark.deploy.SparkSubmitSuite: includes jars passed through spark.jars.packages and spark.jars.repositories 12409 ms
org.apache.spark.deploy.security.HadoopDelegationTokenManagerSuite: SPARK-29082: do not fail if current user does not have credentials 18 ms
org.apache.spark.input.WholeTextFileInputFormatSuite: for small files minimum split size per node and per rack should be less than or equal to maximum split size. 36 ms
org.apache.spark.input.WholeTextFileRecordReaderSuite: Correctness of WholeTextFileRecordReader. 28 ms
org.apache.spark.input.WholeTextFileRecordReaderSuite: Correctness of WholeTextFileRecordReader with GzipCodec. 36 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics for old hadoop with coalesce 16 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with cache and coalesce 25 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics for new Hadoop API with coalesce 13 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics when reading text file 12 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics on records read - simple 15 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics on records read - more stages 16 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics on records - New Hadoop API 10 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics on records read with cache 12 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input read/write and shuffle read/write metrics all line up 14 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with interleaved reads 5 ms
org.apache.spark.metrics.InputOutputMetricsSuite: output metrics on records written 4 ms
org.apache.spark.metrics.InputOutputMetricsSuite: output metrics on records written - new Hadoop API 3 ms
org.apache.spark.metrics.InputOutputMetricsSuite: output metrics when writing text file 13 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with old CombineFileInputFormat 23 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with new CombineFileInputFormat 37 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with old Hadoop API in different thread 25 ms
org.apache.spark.metrics.InputOutputMetricsSuite: input metrics with new Hadoop API in different thread 20 ms
org.apache.spark.rdd.PairRDDFunctionsSuite: zero-partition RDD 32 ms
org.apache.spark.scheduler.OutputCommitCoordinatorIntegrationSuite: exception thrown in OutputCommitter.commitTask() 13 ms
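
Most of the failures above are in Hadoop file I/O suites, with org.apache.spark.FileSuite accounting for the bulk of them. A possible way to reproduce that suite locally at the failing commit, following the individual-Scala-test recipe from Spark's developer docs (the module and profile flags here are assumptions inferred from this job's name, spark-master-test-maven-hadoop-2.7):

    # check out the exact commit this build ran against
    git checkout 3bf43fb60d6f8aba23eaa1e43405024725b50f22
    # run only FileSuite in the core module, with the Hadoop 2.7 profile
    # (-Dtest=none skips Java tests; -DwildcardSuites selects the ScalaTest suite)
    ./build/mvn -pl core -Phadoop-2.7 -Dtest=none \
        -DwildcardSuites=org.apache.spark.FileSuite test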

Test time report

[Interactive drill-down visualization of combined test durations; not reproducible in this text export. Per-test timings are listed above.]