Details for spark-master-test-sbt-hadoop-2.6-ubuntu-test build #1264

View on Jenkins

Duration: 601 minutes
Start time: 2018-10-20 10:05:52
Time in build queue: 514 minutes
Commit: fc9ba9dcc6ad47fbd05f093b94e7e13580000d5f
Executor: research-jenkins-worker-07
Status: ABORTED

Failed tests

org.apache.spark.BarrierStageOnSubmittedSuite: submit a barrier ShuffleMapStage that requires more slots than current total under local-cluster mode 5676 ms
org.apache.spark.FailureSuite: failure in a single-stage job 1 ms
org.apache.spark.FailureSuite: failure in a two-stage job 0 ms
org.apache.spark.FailureSuite: failure in a map stage 0 ms
org.apache.spark.FailureSuite: failure because task results are not serializable 1 ms
org.apache.spark.FailureSuite: failure because task closure is not serializable 0 ms
org.apache.spark.FailureSuite: managed memory leak error should not mask other failures (SPARK-9266) 1 ms
org.apache.spark.FailureSuite: last failure cause is sent back to driver 1 ms
org.apache.spark.FailureSuite: failure cause stacktrace is sent back to driver if exception is not serializable 0 ms
org.apache.spark.FailureSuite: failure cause stacktrace is sent back to driver if exception is not deserializable 0 ms
org.apache.spark.FailureSuite: failure in tasks in a submitMapStage 1 ms
org.apache.spark.FailureSuite: failure because cached RDD partitions are missing from DiskStore (SPARK-15736) 1 ms
org.apache.spark.FailureSuite: SPARK-16304: Link error should not crash executor 1 ms
org.apache.spark.MapOutputTrackerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.StatusTrackerSuite: basic status API usage 1 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() 0 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() with takeAsync() 0 ms
org.apache.spark.StatusTrackerSuite: getJobIdsForGroup() with takeAsync() across multiple partitions 0 ms
org.apache.spark.deploy.history.HistoryServerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.input.WholeTextFileRecordReaderSuite: (It is not a test it is a sbt.testing.SuiteSelector) 10 ms
org.apache.spark.metrics.InputOutputMetricsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 1661 ms
org.apache.spark.rdd.JdbcRDDSuite: basic functionality 1 ms
org.apache.spark.rdd.JdbcRDDSuite: large id overflow 4 ms
org.apache.spark.rdd.LocalCheckpointSuite: (It is not a test it is a sbt.testing.SuiteSelector) 11 ms
org.apache.spark.rdd.RDDOperationScopeSuite: (It is not a test it is a sbt.testing.SuiteSelector) 9 ms
org.apache.spark.rdd.ZippedPartitionsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 10 ms
org.apache.spark.scheduler.EventLoggingListenerSuite: End-to-end event logging 1 ms
org.apache.spark.scheduler.EventLoggingListenerSuite: End-to-end event logging with compression 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: TaskSet with no preferences 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: multiple offers with no preferences 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: skip unsatisfiable locality levels 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: basic delay scheduling 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: we do not need to delay scheduling when we only have noPref tasks in the queue 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: delay scheduling with fallback 1943 ms
org.apache.spark.scheduler.TaskSetManagerSuite: delay scheduling with failed hosts 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: task result lost 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: repeated failures lead to task set abortion 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: executors should be blacklisted after task failure, in spite of locality preferences 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: new executors get added and lost 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Executors exit for reason unrelated to currently running tasks 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: test RACK_LOCAL tasks 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: do not emit warning when serialized task is small 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: emit warning when serialized task is large 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Not serializable exception thrown if the task cannot be serialized 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: abort the job if total size of results is too large 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-22074] Task killed by other attempt task should not be resubmitted 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: speculative and noPref task should be scheduled after node-local 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: node-local tasks should be scheduled right away when there are only node-local and no-preference tasks 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Ensure TaskSetManager is usable after addition of levels 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Kill other task attempts when one attempt belonging to the same task succeeds 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: Killing speculative tasks does not count towards aborting the taskset 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-19868: DagScheduler only notified of taskEnd when state is ready 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: update application blacklist for shuffle-fetch 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: update blacklist before adding pending task to avoid race condition 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-21563 context's added jars shouldn't change mid-TaskSet 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: [SPARK-24677] Avoid NoSuchElementException from MedianHeap 1 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-24755 Executor loss can cause task to not be resubmitted 0 ms
org.apache.spark.scheduler.TaskSetManagerSuite: SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success 0 ms
org.apache.spark.serializer.KryoSerializerAutoResetDisabledSuite: (It is not a test it is a sbt.testing.SuiteSelector) 8 ms
org.apache.spark.serializer.KryoSerializerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.serializer.UnsafeKryoSerializerSuite: (It is not a test it is a sbt.testing.SuiteSelector) 12 ms
org.apache.spark.ui.UISeleniumSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.ui.UIUtilsSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.util.collection.OpenHashSetSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.util.collection.SorterSuite: (It is not a test it is a sbt.testing.SuiteSelector) 0 ms
org.apache.spark.ml.tuning.CrossValidatorSuite: CrossValidator expose sub models 6068 ms
org.apache.spark.ml.tuning.CrossValidatorSuite: read/write: CrossValidator with nested estimator 3 ms
org.apache.spark.ml.tuning.CrossValidatorSuite: read/write: Persistence of nested estimator works if parent directory changes 3 ms
org.apache.spark.ml.tuning.CrossValidatorSuite: read/write: CrossValidator with complex estimator 1 ms
org.apache.spark.ml.tuning.CrossValidatorSuite: read/write: CrossValidatorModel 2 ms

Test time report

Interactive, hierarchical visualization of combined test durations per node; available on the Jenkins build page.