| Suite | Test | Duration |
| --- | --- | --- |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | reuse window partitionBy | 16 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | reuse window orderBy | 13 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | lead | 12 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | lag | 11 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | lead with default value | 15 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | lag with default value | 15 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | rank functions in unspecific window | 39 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | aggregation and rows between | 16 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | aggregation and range betweens | 15 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | aggregation and rows betweens with unbounded | 22 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | aggregation and range betweens with unbounded | 26 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | reverse sliding range frame | 16 ms |
| org.apache.spark.sql.hive.HiveDataFrameWindowSuite | reverse unbounded range frame | 12 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | persistent JSON table | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | persistent JSON table with a user specified schema | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | persistent JSON table with a user specified schema with a subset of fields | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | resolve shortened provider names | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | drop table | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | check change without refresh | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | drop, change, recreate | 3 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | invalidate cache and reload | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | CTAS | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | CTAS with IF NOT EXISTS | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | CTAS a managed table | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | SPARK-5286 Fail to drop an invalid table when using the data source API | 5 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | SPARK-5839 HiveMetastoreCatalog does not recognize table aliases of data source tables. | 11 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | save table | 11 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | create external table | 48 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | scan a parquet table created through a CTAS statement | 17 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | Pre insert nullability check (ArrayType) | 11 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | Pre insert nullability check (MapType) | 10 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | SPARK-6024 wide schema support | 84 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | SPARK-6655 still support a schema stored in spark.sql.sources.schema | 6 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | Saving partition columns information | 12 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | insert into a table | 9 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | SPARK-8156:create table to specific database by 'use dbname' | 8 ms |
| org.apache.spark.sql.hive.MetastoreDataSourcesSuite | skip hive metadata on table creation | 4 ms |
| org.apache.spark.sql.hive.ParquetHiveCompatibilitySuite | simple primitives | 18 ms |
| org.apache.spark.sql.hive.ParquetHiveCompatibilitySuite | SPARK-10177 timestamp | 12 ms |
| org.apache.spark.sql.hive.ParquetHiveCompatibilitySuite | array | 12 ms |
| org.apache.spark.sql.hive.ParquetHiveCompatibilitySuite | map | 11 ms |
| org.apache.spark.sql.hive.ParquetHiveCompatibilitySuite | struct | 11 ms |
| org.apache.spark.sql.hive.QueryPartitionSuite | SPARK-5068: query data when path doesn't exist | 0 ms |
| org.apache.spark.sql.hive.UDFSuite | UDF case insensitive | 23 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - StringType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - BinaryType | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - BooleanType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - ByteType | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - ShortType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - IntegerType | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - LongType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - FloatType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - DoubleType | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - DecimalType(25,5) | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - DecimalType(6,5) | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - DateType | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - TimestampType | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - ArrayType(IntegerType,true) | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - MapType(StringType,LongType,true) | 20 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | test all data types - StructType(StructField(f1,FloatType,true), StructField(f2,ArrayType(BooleanType,true),true)) | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - non-partitioned table - Overwrite | 6 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - non-partitioned table - Append | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - non-partitioned table - ErrorIfExists | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - non-partitioned table - Ignore | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - simple queries | 14 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - Overwrite | 11 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - Append | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - Append - new partition values | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - ErrorIfExists | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - Ignore | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - non-partitioned table - Overwrite | 3 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - non-partitioned table - Append | 3 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - simple queries | 4 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - boolean type | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Overwrite | 4 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append | 4 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append - new partition values | 3 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append - mismatched partition columns | 3 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | Hadoop style globbing | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-9735 Partition column type casting | 12 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-7616: adjust column name order accordingly when saving partitioned table | 7 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-8406: Avoids name collision while writing files | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-8578 specified custom output committer will not be used to append data | 0 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-8887: Explicitly define which data types can be used as dynamic partition columns | 8 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-9899 Disable customized output committer when speculation is on | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | HadoopFsRelation produces UnsafeRow | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | save()/load() - partitioned table - simple queries - partition columns in data | 1 ms |
| org.apache.spark.sql.hive.orc.OrcHadoopFsRelationSuite | SPARK-12218: 'Not' is included in ORC filter pushdown | 4 ms |
| org.apache.spark.sql.sources.CommitFailureTestRelationSuite | SPARK-7684: commitTask() failure should fallback to abortTask() | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - non-partitioned table - Append | 6878940 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - non-partitioned table - ErrorIfExists | 9 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - non-partitioned table - Ignore | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - simple queries | 13 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - Overwrite | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - Append | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - Append - new partition values | 0 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - ErrorIfExists | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - Ignore | 80 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - non-partitioned table - Overwrite | 9 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - non-partitioned table - Append | 3 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - simple queries | 6 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - boolean type | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Overwrite | 7 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append | 11 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append - new partition values | 10 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | saveAsTable()/load() - partitioned table - Append - mismatched partition columns | 5 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | Hadoop style globbing | 5 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-9735 Partition column type casting | 22 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-7616: adjust column name order accordingly when saving partitioned table | 9 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-8406: Avoids name collision while writing files | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-8578 specified custom output committer will not be used to append data | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-8887: Explicitly define which data types can be used as dynamic partition columns | 9 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-9899 Disable customized output committer when speculation is on | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | HadoopFsRelation produces UnsafeRow | 8 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | save()/load() - partitioned table - simple queries - partition columns in data | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-9894: save complex types to JSON | 1 ms |
| org.apache.spark.sql.sources.JsonHadoopFsRelationSuite | SPARK-10196: save decimal type to JSON | 1 ms |