Details for spark-master-test-sbt-hadoop-2.6-ubuntu-test build #1180

Duration: 600 minutes
Start time: 2018-10-01 19:24:26
Commit: a802c69b130b69a35b372ffe1b01289577f6fafb
Executor: research-jenkins-worker-08
Status: ABORTED

Failed tests

org.apache.spark.BarrierStageOnSubmittedSuite: submit a barrier ShuffleMapStage that requires more slots than current total under local-cluster mode 8254 ms
org.apache.spark.ExecutorPluginSuite: testAddMultiplePlugins 1 ms
org.apache.spark.ExecutorPluginSuite: testPluginClassDoesNotExist 0 ms
org.apache.spark.ExecutorPluginSuite: testAddPlugin 1 ms
org.apache.spark.ExecutorPluginSuite: testPluginShutdownWithException 0 ms
org.apache.spark.ExternalShuffleServiceSuite: shuffle serializer 111075 ms
org.apache.spark.ExternalShuffleServiceSuite: zero sized blocks 9471346 ms
org.apache.spark.ExternalShuffleServiceSuite: zero sized blocks without kryo 2 ms
org.apache.spark.ExternalShuffleServiceSuite: shuffle on mutable pairs 0 ms
org.apache.spark.ExternalShuffleServiceSuite: sorting on mutable pairs 1 ms
org.apache.spark.ExternalShuffleServiceSuite: cogroup using mutable pairs 1 ms
org.apache.spark.ExternalShuffleServiceSuite: subtract mutable pairs 0 ms
org.apache.spark.ExternalShuffleServiceSuite: sort with Java non serializable class - Kryo 0 ms
org.apache.spark.ExternalShuffleServiceSuite: sort with Java non serializable class - Java 1 ms
org.apache.spark.ExternalShuffleServiceSuite: shuffle with different compression settings (SPARK-3426) 1 ms
org.apache.spark.ExternalShuffleServiceSuite: [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file 0 ms
org.apache.spark.ExternalShuffleServiceSuite: cannot find its local shuffle file if no execution of the stage and rerun shuffle 0 ms
org.apache.spark.ExternalShuffleServiceSuite: metrics for shuffle without aggregation 1 ms
org.apache.spark.ExternalShuffleServiceSuite: metrics for shuffle with aggregation 1 ms
org.apache.spark.ExternalShuffleServiceSuite: multiple simultaneous attempts for one task (SPARK-8029) 1 ms
org.apache.spark.ExternalShuffleServiceSuite: using external shuffle service 1 ms
org.apache.spark.FailureSuite: failure in a single-stage job 1 ms
org.apache.spark.FailureSuite: failure in a two-stage job 1 ms
org.apache.spark.FailureSuite: failure in a map stage 0 ms
org.apache.spark.FailureSuite: failure because task results are not serializable 1 ms
org.apache.spark.FailureSuite: failure because task closure is not serializable 1 ms
org.apache.spark.FailureSuite: managed memory leak error should not mask other failures (SPARK-9266) 0 ms
org.apache.spark.FailureSuite: last failure cause is sent back to driver 1 ms
org.apache.spark.FailureSuite: failure cause stacktrace is sent back to driver if exception is not serializable 1 ms
org.apache.spark.FailureSuite: failure cause stacktrace is sent back to driver if exception is not deserializable 1 ms
org.apache.spark.FailureSuite: failure in tasks in a submitMapStage 1 ms
org.apache.spark.FailureSuite: failure because cached RDD partitions are missing from DiskStore (SPARK-15736) 1 ms
org.apache.spark.FailureSuite: SPARK-16304: Link error should not crash executor 1 ms
org.apache.spark.JavaJdbcRDDSuite: testJavaJdbcRDD 5 ms
org.apache.spark.MapOutputTrackerSuite: remote fetch using broadcast 1 ms
org.apache.spark.SparkConfSuite: creating SparkContext with both master and app name 0 ms
org.apache.spark.SparkConfSuite: SparkContext property overriding 0 ms
org.apache.spark.SparkContextSchedulerCreationSuite: bad-master 2 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local 0 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-* 0 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-n 1 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-*-n-failures 1 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-n-failures 1 ms
org.apache.spark.SparkContextSchedulerCreationSuite: bad-local-n 1 ms
org.apache.spark.SparkContextSchedulerCreationSuite: bad-local-n-failures 1 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-default-parallelism 0 ms
org.apache.spark.SparkContextSchedulerCreationSuite: local-cluster 1 ms
org.apache.spark.StatusTrackerSuite: basic status API usage 0 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() 0 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() with takeAsync() 0 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() with takeAsync() across multiple partitions 1 ms
org.apache.spark.UnpersistSuite: unpersist RDD 1 ms
org.apache.spark.deploy.history.HistoryServerSuite: incomplete apps get refreshed 346 ms
org.apache.spark.deploy.history.HistoryServerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 9773 ms
org.apache.spark.deploy.worker.DriverRunnerTest: Kill process finalized with state KILLED 10131 ms
org.apache.spark.deploy.worker.DriverRunnerTest: Finalized with state FINISHED 10075 ms
org.apache.spark.deploy.worker.DriverRunnerTest: Finalized with state FAILED 10090 ms
org.apache.spark.executor.ExecutorSuite: SPARK-19276: Handle FetchFailedExceptions that are hidden by user exceptions 1 ms
org.apache.spark.executor.ExecutorSuite: Executor's worker threads should be UninterruptibleThread 1 ms
org.apache.spark.executor.ExecutorSuite: SPARK-19276: OOMs correctly handled with a FetchFailure 0 ms
org.apache.spark.executor.ExecutorSuite: SPARK-23816: interrupts are not masked by a FetchFailure 1 ms
org.apache.spark.executor.ExecutorSuite: Gracefully handle error in task deserialization 28 ms
org.apache.spark.input.WholeTextFileRecordReaderSuite: (It is not a test it is a sbt.testing.SuiteSelector) 15 ms
org.apache.spark.launcher.SparkLauncherSuite: testInProcessLauncher 10052 ms
org.apache.spark.metrics.InputOutputMetricsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 31 ms
org.apache.spark.rdd.AsyncRDDActionsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 22 ms
org.apache.spark.rdd.JdbcRDDSuite: basic functionality 1 ms
org.apache.spark.rdd.JdbcRDDSuite: large id overflow 3 ms
org.apache.spark.rdd.LocalCheckpointSuite: (It is not a test it is a sbt.testing.SuiteSelector) 15 ms
org.apache.spark.rdd.PartitionPruningRDDSuite: (It is not a test it is a sbt.testing.SuiteSelector) 15 ms
org.apache.spark.rdd.RDDOperationScopeSuite: (It is not a test it is a sbt.testing.SuiteSelector) 20 ms
org.apache.spark.rdd.ZippedPartitionsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 21 ms
org.apache.spark.scheduler.EventLoggingListenerSuite: End-to-end event logging 1 ms
org.apache.spark.scheduler.EventLoggingListenerSuite: End-to-end event logging with compression 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: TaskSet with no preferences 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: multiple offers with no preferences 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: skip unsatisfiable locality levels 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: basic delay scheduling 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: we do not need to delay scheduling when we only have noPref tasks in the queue 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: delay scheduling with fallback 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: delay scheduling with failed hosts 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: task result lost 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: repeated failures lead to task set abortion 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: executors should be blacklisted after task failure, in spite of locality preferences 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: new executors get added and lost 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Executors exit for reason unrelated to currently running tasks 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: test RACK_LOCAL tasks 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: do not emit warning when serialized task is small 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: emit warning when serialized task is large 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Not serializable exception thrown if the task cannot be serialized 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: abort the job if total size of results is too large 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-22074] Task killed by other attempt task should not be resubmitted 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: speculative and noPref task should be scheduled after node-local 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: node-local tasks should be scheduled right away when there are only node-local and no-preference tasks 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Ensure TaskSetManager is usable after addition of levels 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Kill other task attempts when one attempt belonging to the same task succeeds 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Killing speculative tasks does not count towards aborting the taskset 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-19868: DagScheduler only notified of taskEnd when state is ready 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: update application blacklist for shuffle-fetch 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: update blacklist before adding pending task to avoid race condition 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-21563 context's added jars shouldn't change mid-TaskSet 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-24677] Avoid NoSuchElementException from MedianHeap 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-24755 Executor loss can cause task to not be resubmitted 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success 1 ms
org.apache.spark.security.CryptoStreamUtilsSuite: encryption key propagation to executors 540 ms
org.apache.spark.serializer.KryoSerializerAutoResetDisabledSuite: (It is not a test it is a sbt.testing.SuiteSelector) 15 ms
org.apache.spark.serializer.KryoSerializerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 16 ms
org.apache.spark.serializer.UnsafeKryoSerializerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 18 ms
org.apache.spark.ui.UISeleniumSuite: effects of unpersist() / persist() should be reflected 1 ms
org.apache.spark.ui.UISeleniumSuite: failed stages should not appear to be active 1 ms
org.apache.spark.ui.UISeleniumSuite: killEnabled should properly control kill button display 1 ms
org.apache.spark.ui.UISeleniumSuite: jobs page should not display job group name unless some job was submitted in a job group 0 ms
org.apache.spark.ui.UISeleniumSuite: job progress bars should handle stage / task failures 1 ms
org.apache.spark.ui.UISeleniumSuite: job details page should display useful information for stages that haven't started 1 ms
org.apache.spark.ui.UISeleniumSuite: job progress bars / cells reflect skipped stages / tasks 1 ms
org.apache.spark.ui.UISeleniumSuite: stages that aren't run appear as 'skipped stages' after a job finishes 0 ms
org.apache.spark.ui.UISeleniumSuite: jobs with stages that are skipped should show correct link descriptions on all jobs page 1 ms
org.apache.spark.ui.UISeleniumSuite: attaching and detaching a new tab 1 ms
org.apache.spark.ui.UISeleniumSuite: kill stage POST/GET response is correct 0 ms
org.apache.spark.ui.UISeleniumSuite: kill job POST/GET response is correct 1 ms
org.apache.spark.ui.UISeleniumSuite: stage & job retention 1 ms
org.apache.spark.ui.UISeleniumSuite: live UI json application list 1 ms
org.apache.spark.ui.UISeleniumSuite: job stages should have expected dotfile under DAG visualization 1 ms
org.apache.spark.ui.UISeleniumSuite: stages page should show skipped stages 1 ms
org.apache.spark.utils.PeriodicRDDCheckpointerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 14 ms
test.org.apache.spark.Java8RDDAPISuite: leftOuterJoin 1 ms
test.org.apache.spark.Java8RDDAPISuite: foldReduce 0 ms
test.org.apache.spark.Java8RDDAPISuite: mapsFromPairsToPairs 0 ms
test.org.apache.spark.Java8RDDAPISuite: flatMap 0 ms
test.org.apache.spark.Java8RDDAPISuite: foreach 1 ms
test.org.apache.spark.Java8RDDAPISuite: map 1 ms
test.org.apache.spark.Java8RDDAPISuite: zip 1 ms
test.org.apache.spark.Java8RDDAPISuite: keyBy 1 ms
test.org.apache.spark.Java8RDDAPISuite: groupBy 1 ms
test.org.apache.spark.Java8RDDAPISuite: mapPartitions 1 ms
test.org.apache.spark.Java8RDDAPISuite: foldByKey 0 ms
test.org.apache.spark.Java8RDDAPISuite: mapOnPairRDD 0 ms
test.org.apache.spark.Java8RDDAPISuite: sequenceFile 1 ms
test.org.apache.spark.Java8RDDAPISuite: collectPartitions 0 ms
test.org.apache.spark.Java8RDDAPISuite: reduceByKey 0 ms
test.org.apache.spark.Java8RDDAPISuite: foreachWithAnonymousClass 1 ms
test.org.apache.spark.Java8RDDAPISuite: collectAsMapWithIntArrayValues 0 ms
test.org.apache.spark.Java8RDDAPISuite: zipPartitions 0 ms
test.org.apache.spark.JavaAPISuite: groupByOnPairRDD 1 ms
test.org.apache.spark.JavaAPISuite: binaryFilesCaching 0 ms
test.org.apache.spark.JavaAPISuite: sparkContextUnion 0 ms
test.org.apache.spark.JavaAPISuite: checkpointAndComputation 0 ms
test.org.apache.spark.JavaAPISuite: leftOuterJoin 1 ms
test.org.apache.spark.JavaAPISuite: keyByOnPairRDD 0 ms
test.org.apache.spark.JavaAPISuite: getNumPartitions 1 ms
test.org.apache.spark.JavaAPISuite: wholeTextFiles 0 ms
test.org.apache.spark.JavaAPISuite: binaryFiles 0 ms
test.org.apache.spark.JavaAPISuite: foldReduce 1 ms
test.org.apache.spark.JavaAPISuite: writeWithNewAPIHadoopFile 0 ms
test.org.apache.spark.JavaAPISuite: hadoopFile 0 ms
test.org.apache.spark.JavaAPISuite: lookup 1 ms
test.org.apache.spark.JavaAPISuite: countAsync 0 ms
test.org.apache.spark.JavaAPISuite: textFiles 0 ms
test.org.apache.spark.JavaAPISuite: binaryRecords 0 ms
test.org.apache.spark.JavaAPISuite: toLocalIterator 0 ms
test.org.apache.spark.JavaAPISuite: repartitionAndSortWithinPartitions 1 ms
test.org.apache.spark.JavaAPISuite: reduce 0 ms
test.org.apache.spark.JavaAPISuite: sample 0 ms
test.org.apache.spark.JavaAPISuite: sortBy 1 ms
test.org.apache.spark.JavaAPISuite: mapsFromPairsToPairs 0 ms
test.org.apache.spark.JavaAPISuite: flatMap 0 ms
test.org.apache.spark.JavaAPISuite: cogroup3 1 ms
test.org.apache.spark.JavaAPISuite: cogroup4 0 ms
test.org.apache.spark.JavaAPISuite: randomSplit 0 ms
test.org.apache.spark.JavaAPISuite: persist 0 ms
test.org.apache.spark.JavaAPISuite: foreach 0 ms
test.org.apache.spark.JavaAPISuite: hadoopFileCompressed 1 ms
test.org.apache.spark.JavaAPISuite: accumulators 0 ms
test.org.apache.spark.JavaAPISuite: textFilesCompressed 0 ms
test.org.apache.spark.JavaAPISuite: testAsyncActionCancellation 0 ms
test.org.apache.spark.JavaAPISuite: checkpointAndRestore 0 ms
test.org.apache.spark.JavaAPISuite: sortByKey 0 ms
test.org.apache.spark.JavaAPISuite: aggregateByKey 0 ms
test.org.apache.spark.JavaAPISuite: map 1 ms
test.org.apache.spark.JavaAPISuite: max 0 ms
test.org.apache.spark.JavaAPISuite: min 0 ms
test.org.apache.spark.JavaAPISuite: top 1 ms
test.org.apache.spark.JavaAPISuite: zip 0 ms
test.org.apache.spark.JavaAPISuite: fold 0 ms
test.org.apache.spark.JavaAPISuite: glom 0 ms
test.org.apache.spark.JavaAPISuite: take 0 ms
test.org.apache.spark.JavaAPISuite: javaDoubleRDDHistoGram 1 ms
test.org.apache.spark.JavaAPISuite: collectUnderlyingScalaRDD 0 ms
test.org.apache.spark.JavaAPISuite: keyBy 0 ms
test.org.apache.spark.JavaAPISuite: mapPartitionsWithIndex 0 ms
test.org.apache.spark.JavaAPISuite: sampleByKey 0 ms
test.org.apache.spark.JavaAPISuite: intersection 1 ms
test.org.apache.spark.JavaAPISuite: aggregate 0 ms
test.org.apache.spark.JavaAPISuite: cartesian 1 ms
test.org.apache.spark.JavaAPISuite: countApproxDistinctByKey 0 ms
test.org.apache.spark.JavaAPISuite: readWithNewAPIHadoopFile 0 ms
test.org.apache.spark.JavaAPISuite: testRegisterKryoClasses 1 ms
test.org.apache.spark.JavaAPISuite: groupBy 0 ms
test.org.apache.spark.JavaAPISuite: sampleByKeyExact 0 ms
test.org.apache.spark.JavaAPISuite: mapPartitions 0 ms
test.org.apache.spark.JavaAPISuite: takeOrdered 0 ms
test.org.apache.spark.JavaAPISuite: foldByKey 1 ms
test.org.apache.spark.JavaAPISuite: objectFilesOfInts 0 ms
test.org.apache.spark.JavaAPISuite: treeAggregate 1 ms
test.org.apache.spark.JavaAPISuite: testGetPersistentRDDs 0 ms
test.org.apache.spark.JavaAPISuite: approximateResults 0 ms
test.org.apache.spark.JavaAPISuite: treeReduce 0 ms
test.org.apache.spark.JavaAPISuite: collectAsMapAndSerialize 0 ms
test.org.apache.spark.JavaAPISuite: countApproxDistinct 1 ms
test.org.apache.spark.JavaAPISuite: javaDoubleRDD 0 ms
test.org.apache.spark.JavaAPISuite: mapOnPairRDD 1 ms
test.org.apache.spark.JavaAPISuite: testAsyncActionErrorWrapping 0 ms
test.org.apache.spark.JavaAPISuite: naturalMax 0 ms
test.org.apache.spark.JavaAPISuite: naturalMin 0 ms
test.org.apache.spark.JavaAPISuite: sequenceFile 0 ms
test.org.apache.spark.JavaAPISuite: collectPartitions 1 ms
test.org.apache.spark.JavaAPISuite: cogroup 0 ms
test.org.apache.spark.JavaAPISuite: reduceByKey 1 ms
test.org.apache.spark.JavaAPISuite: repartition 0 ms
test.org.apache.spark.JavaAPISuite: iterator 0 ms
test.org.apache.spark.JavaAPISuite: emptyRDD 0 ms
test.org.apache.spark.JavaAPISuite: zipWithIndex 0 ms
test.org.apache.spark.JavaAPISuite: foreachPartition 1 ms
test.org.apache.spark.JavaAPISuite: combineByKey 0 ms
test.org.apache.spark.JavaAPISuite: takeAsync 1 ms
test.org.apache.spark.JavaAPISuite: collectAsMapWithIntArrayValues 1 ms
test.org.apache.spark.JavaAPISuite: objectFilesOfComplexTypes 0 ms
test.org.apache.spark.JavaAPISuite: zipWithUniqueId 0 ms
test.org.apache.spark.JavaAPISuite: collectAsync 0 ms
test.org.apache.spark.JavaAPISuite: foreachAsync 1 ms
test.org.apache.spark.JavaAPISuite: zipPartitions 0 ms
test.org.apache.spark.JavaAPISuite: reduceOnJavaDoubleRDD 1 ms
test.org.apache.spark.JavaAPISuite: isEmpty 0 ms
test.org.apache.spark.JavaSparkContextSuite: javaSparkContext 1 ms
test.org.apache.spark.JavaSparkContextSuite: scalaSparkContext 0 ms

Test time report
