org.apache.spark.sql.kafka010.KafkaSourceStressSuite: stress test with multiple topics and partitions | 2921 ms
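The Kafka stress test above drives a streaming query across many topics and partitions. As a point of reference only, here is a minimal sketch of the reader API it exercises; the broker address and topic pattern below are assumptions, not values taken from the suite.

import org.apache.spark.sql.SparkSession

object KafkaReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-read-sketch").getOrCreate()

    // Subscribe to a pattern of topics, mirroring the multi-topic setup of the stress test.
    val frames = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker address
      .option("subscribePattern", "stress-.*")             // hypothetical topic name pattern
      .load()

    // Kafka records arrive as binary key/value columns; cast them for inspection.
    val query = frames.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}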
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table | 38 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table with a user specified schema | 164 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: persistent JSON table with a user specified schema with a subset of fields | 143 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: resolve shortened provider names | 46 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: drop table | 33 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: check change without refresh | 103 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: drop, change, recreate | 58 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: invalidate cache and reload | 169 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS | 74 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS with IF NOT EXISTS | 57 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS a managed table | 37 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable(CTAS) using append and insertInto when the target table is Hive serde | 16 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-5839 HiveMetastoreCatalog does not recognize table aliases of data source tables. | 24 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: save table | 55 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: create external table | 61 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: scan a parquet table created through a CTAS statement | 29 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Pre insert nullability check (ArrayType) | 42 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Pre insert nullability check (MapType) | 46 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Saving partitionBy columns information | 47 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Saving information for sortBy and bucketBy columns | 150 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: insert into a table | 29 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append table using different formats | 28 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: append a table using the same formats but different names | 28 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-8156:create table to specific database by 'use dbname' | 111 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: skip hive metadata on table creation | 33 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted partitioned data source table | 42 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted bucketed data source table | 39 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: CTAS: persisted partitioned bucketed data source table | 44 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: the column order doesn't matter | 39 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: mismatch column names | 25 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: too many columns | 40 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable - source and target are the same table | 27 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: insertInto - source and target are the same table | 25 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: saveAsTable[append]: less columns | 24 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-15025: create datasource table with path with select | 44 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: SPARK-15269 external data source table creation | 26 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: should keep data source entries in table properties when debug mode is on | 1 ms
org.apache.spark.sql.hive.MetastoreDataSourcesSuite: Infer schema for Hive serde tables | 164 ms
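The MetastoreDataSourcesSuite timings above cover persisting data source tables in the Hive metastore: saveAsTable, CTAS, and appends via insertInto. A minimal sketch of that kind of workflow, assuming a SparkSession with Hive support and hypothetical table names (this is not the suite's own code):

import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("metastore-datasource-sketch")
  .enableHiveSupport()
  .getOrCreate()
import spark.implicits._

// Persist a JSON-backed data source table into the Hive metastore.
val people = Seq((1, "alice"), (2, "bob")).toDF("id", "name")
people.write.format("json").saveAsTable("json_people") // hypothetical table name

// CTAS into a Parquet-backed table, then append more rows by position.
spark.sql("CREATE TABLE ctas_people USING parquet AS SELECT id, name FROM json_people")
people.write.mode(SaveMode.Append).insertInto("ctas_people")

spark.sql("SELECT COUNT(*) FROM ctas_people").show()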
org.apache.spark.sql.hive.OptimizeHiveMetadataOnlyQuerySuite: SPARK-23877: validate metadata-only query pushes filters to metastore | 149 ms
org.apache.spark.sql.hive.OptimizeHiveMetadataOnlyQuerySuite: SPARK-23877: filter on projected expression | 83 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: typed_count without grouping keys | 70 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: typed_count without grouping keys and empty input | 32 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: typed_count with grouping keys | 57 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: typed_count fallback to sort-based aggregation | 46 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: random input data types | 8 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed] - with grouping keys - with empty input | 7 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed] - with grouping keys - with non-empty input | 24 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed] - without grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed] - without grouping keys - with non-empty input | 15 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe] - with grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe] - with grouping keys - with non-empty input | 97 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe] - without grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe] - without grouping keys - with non-empty input | 60 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe] - with grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe] - with grouping keys - with non-empty input | 101 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe] - without grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe] - without grouping keys - with non-empty input | 75 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with distinct] - with grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with distinct] - with grouping keys - with non-empty input | 157 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with distinct] - without grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with distinct] - without grouping keys - with non-empty input | 101 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe] - with grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe] - with grouping keys - with non-empty input | 28 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe] - without grouping keys - with non-empty input | 14 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe] - with grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe] - with grouping keys - with non-empty input | 20 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe] - without grouping keys - with non-empty input | 16 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with distinct] - with grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with distinct] - with grouping keys - with non-empty input | 24 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with distinct] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with distinct] - without grouping keys - with non-empty input | 15 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe] - with grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe] - with grouping keys - with non-empty input | 77 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe] - without grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe] - without grouping keys - with non-empty input | 63 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with distinct] - with grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with distinct] - with grouping keys - with non-empty input | 144 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with distinct] - without grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with distinct] - without grouping keys - with non-empty input | 132 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe, with distinct] - with grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe, with distinct] - with grouping keys - with non-empty input | 85 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe, with distinct] - without grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + safe, with distinct] - without grouping keys - with non-empty input | 89 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe] - with grouping keys - with empty input | 0 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe] - with grouping keys - with non-empty input | 43 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe] - without grouping keys - with non-empty input | 16 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with distinct] - with grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with distinct] - with grouping keys - with non-empty input | 29 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with distinct] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with distinct] - without grouping keys - with non-empty input | 16 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe, with distinct] - with grouping keys - with empty input | 3 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe, with distinct] - with grouping keys - with non-empty input | 25 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe, with distinct] - without grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + safe, with distinct] - without grouping keys - with non-empty input | 16 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe, with distinct] - with grouping keys - with empty input | 2 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe, with distinct] - with grouping keys - with non-empty input | 101 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe, with distinct] - without grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [with partial + unsafe, with partial + safe, with distinct] - without grouping keys - with non-empty input | 177 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe, with distinct] - with grouping keys - with empty input | 1 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe, with distinct] - with grouping keys - with non-empty input | 94 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe, with distinct] - without grouping keys - with empty input | 3 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: randomized aggregation test - [typed, with partial + unsafe, with partial + safe, with distinct] - without grouping keys - with non-empty input | 17 ms
org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite: SPARK-18403 Fix unsafe data false sharing issue in ObjectHashAggregateExec | 83 ms
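The ObjectHashAggregateSuite rows above exercise ObjectHashAggregateExec under many plan shapes (typed aggregates, partial aggregation with safe/unsafe rows, distinct, sort-based fallback). A minimal sketch of an aggregate that is normally planned with this operator; the configuration key below is an assumption, and collect_list merely stands in for the suite's test-only typed_count UDAF:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.collect_list

val spark = SparkSession.builder().appName("object-hash-agg-sketch").getOrCreate()

// Assumed flag name; object hash aggregation is enabled by default in recent Spark versions.
spark.conf.set("spark.sql.execution.useObjectHashAggregateExec", "true")

// collect_list is an object-based (TypedImperativeAggregate) function, so the
// physical plan should use ObjectHashAggregate rather than HashAggregate.
val grouped = spark.range(0, 1000)
  .selectExpr("id % 10 AS k", "id AS v")
  .groupBy("k")
  .agg(collect_list("v").as("vs"))

grouped.explain() // inspect the plan for an ObjectHashAggregate node
grouped.show()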
org.apache.spark.sql.hive.execution.SQLMetricsSuite: writing data out metrics: hive | 27 ms
org.apache.spark.sql.hive.execution.SQLMetricsSuite: writing data out metrics dynamic partition: hive | 201 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-16975: Partitioned table with the column having '_' should be read correctly | 556 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: save()/load() - partitioned table - simple queries - partition columns in data | 20 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-12218: 'Not' is included in ORC filter pushdown | 52 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: SPARK-13543: Support for specifying compression codec for ORC via option() | 45 ms
org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite: Default compression codec is snappy for ORC compression | 30 ms
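The last two OrcHadoopFsRelationSuite entries concern the ORC compression codec option. A minimal sketch of writing ORC with an explicit codec and reading it back; the output path is hypothetical, and per the test names above, snappy is the default when no codec is given:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("orc-compression-sketch").getOrCreate()

val path = "/tmp/orc_compression_sketch" // hypothetical output path

// Write ORC files with an explicit compression codec via option().
spark.range(100).toDF("id")
  .write
  .option("compression", "zlib") // omit the option to fall back to the snappy default
  .mode("overwrite")
  .orc(path)

spark.read.orc(path).show()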