Details for spark-master-test-maven-hadoop-2.7 build #4831

Duration: 231 minutes
Start time: 2018-05-16 18:08:21
Commit: 6fb7d6c4f71be0007942f7d1fc3099f1bcf8c52b
Executor: amp-jenkins-worker-06
Status: FAILURE

Failed tests

org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: assign from earliest offsets (failOnDataLoss: true) 988 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: assign from specific offsets (failOnDataLoss: true) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from latest offsets (failOnDataLoss: true) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from earliest offsets (failOnDataLoss: true) 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from specific offsets (failOnDataLoss: true) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from latest offsets (failOnDataLoss: true) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from earliest offsets (failOnDataLoss: true) 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from specific offsets (failOnDataLoss: true) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: assign from latest offsets (failOnDataLoss: false) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: assign from earliest offsets (failOnDataLoss: false) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: assign from specific offsets (failOnDataLoss: false) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from latest offsets (failOnDataLoss: false) 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from earliest offsets (failOnDataLoss: false) 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by name from specific offsets (failOnDataLoss: false) 2 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from latest offsets (failOnDataLoss: false) 3 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from earliest offsets (failOnDataLoss: false) 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern from specific offsets (failOnDataLoss: false) 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: Kafka column types 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: (de)serialization of initial offsets 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: maxOffsetsPerTrigger 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: input row metrics 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: subscribing topic by pattern with topic deletions 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: ensure that initial offset are written with an extra byte in the beginning (SPARK-19517) 6 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: deserialization of initial offset written by Spark 2.1.0 (SPARK-19517) 5 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: deserialization of initial offset written by future version 5 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: KafkaSource with watermark 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: delete a topic when a Spark job is running 1 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: SPARK-22956: currentPartitionOffsets should be set when no new data comes in 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: ensure stream-stream self-join generates only one offset in log and correct metrics 0 ms
org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite: V1 Source is used when disabled through SQLConf 1 ms
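
To re-run the failing suite locally at the same commit, here is a minimal sketch assuming the standard Spark Maven test workflow from the developer docs; the module path external/kafka-0-10-sql, the hadoop-2.7 profile, and a prior full -DskipTests install build are assumptions, not details taken from this report:

    git checkout 6fb7d6c4f71be0007942f7d1fc3099f1bcf8c52b
    ./build/mvn -Phadoop-2.7 -DskipTests clean install    # assumed prerequisite: build and install all modules once
    ./build/mvn -Phadoop-2.7 -pl external/kafka-0-10-sql \
        -Dtest=none \
        -DwildcardSuites=org.apache.spark.sql.kafka010.KafkaMicroBatchV1SourceSuite \
        test

The -DwildcardSuites / -Dtest=none pair restricts the run to the named ScalaTest suite, which matches the KafkaMicroBatchV1SourceSuite failures listed above.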

Test time report

Interactive drill-down visualization of combined test durations per suite and test (not reproduced in this text report).