Compatibility Guide

Comet aims to produce results that are consistent with the version of Apache Spark that is being used.

This guide offers information about areas of functionality where there are known differences.

Parquet

Comet has the following limitations when reading Parquet files:

  • Comet does not support reading decimals encoded in binary format.
  • Comet does not support default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.

ANSI Mode

Comet will fall back to Spark for the following expressions when ANSI mode is enabled. Native execution of these expressions can still be enabled by setting spark.comet.expression.EXPRNAME.allowIncompatible=true, where EXPRNAME is the Spark expression class name. See the Comet Supported Expressions Guide for more information on this configuration setting.

  • Average (supports all numeric inputs except decimal types)
  • Cast (in some cases)
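
For example, to keep Cast running natively under ANSI mode despite the known incompatibilities (a minimal sketch, assuming a SparkSession named spark with the Comet plugin enabled):

```scala
// Enable ANSI mode (standard Spark configuration).
spark.conf.set("spark.sql.ansi.enabled", "true")

// Opt in to Comet's native Cast even though it is not fully
// ANSI-compatible; not recommended for production use.
spark.conf.set("spark.comet.expression.Cast.allowIncompatible", "true")
```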

There is an epic where we are tracking the work to fully implement ANSI support.

Floating-point Number Comparison

Spark normalizes NaN and zero for floating-point numbers in several cases; see the NormalizeFloatingNumbers optimization rule in Spark. One exception is comparison: Spark does not normalize NaN and zero when comparing values because its own comparison functions already handle them (e.g., SQLOrderingUtil.compareFloats). However, the comparison kernels in arrow-rs used by DataFusion do not normalize NaN and zero (e.g., arrow::compute::kernels::cmp::eq). Comet therefore adds an explicit normalization expression for NaN and zero in comparisons, but results may still differ from Spark in some cases, especially when the data contains both positive and negative zero. This is an edge case that is unlikely to concern most users. If it is a concern, setting spark.comet.exec.strictFloatingPoint=true will make the relevant operations fall back to Spark.
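
A sketch of the opt-out, assuming a SparkSession named spark:

```scala
// If comparison keys may contain both 0.0 and -0.0 (or NaN), force the
// affected operations to fall back to Spark rather than run natively.
spark.conf.set("spark.comet.exec.strictFloatingPoint", "true")
```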

Incompatible Expressions

Expressions that are not 100% Spark-compatible will fall back to Spark by default and can be enabled by setting spark.comet.expression.EXPRNAME.allowIncompatible=true, where EXPRNAME is the Spark expression class name. See the Comet Supported Expressions Guide for more information on this configuration setting.
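
For example, to allow a specific incompatible expression to run natively (a sketch; ArrayUnion is used here purely as an illustration):

```scala
// EXPRNAME is the Spark expression class name, e.g. ArrayUnion:
spark.conf.set("spark.comet.expression.ArrayUnion.allowIncompatible", "true")
```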

Aggregate Expressions

  • CollectSet: Comet deduplicates NaN values (treating all NaN values as equal), while Spark treats each NaN as a distinct value. When spark.comet.exec.strictFloatingPoint=true, collect_set on floating-point types falls back to Spark unless spark.comet.expression.CollectSet.allowIncompatible=true is set.
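
A sketch of the difference, assuming a SparkSession named spark (Comet's native output below is inferred from the description above):

```scala
import spark.implicits._
import org.apache.spark.sql.functions.{col, collect_set}

val df = Seq(Double.NaN, Double.NaN, 1.0).toDF("x")
df.agg(collect_set(col("x"))).show(false)
// Spark: each NaN is a distinct value, e.g. [NaN, NaN, 1.0]
// Comet: NaN values are deduplicated,  e.g. [NaN, 1.0]
```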

Array Expressions

  • ArrayUnion: Sorts input arrays before performing the union, while Spark preserves the order of the first array and appends unique elements from the second (see the example after this list). #3644
  • SortArray: Nested arrays with Struct or Null child values are not supported natively and will fall back to Spark.
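
A sketch of the ArrayUnion ordering difference (Comet's output below is inferred from the description above):

```scala
spark.sql("SELECT array_union(array(3, 1, 2), array(2, 4))").show(false)
// Spark: [3, 1, 2, 4]  (first array's order preserved, unique elements appended)
// Comet: a sorted result, e.g. [1, 2, 3, 4]
```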

Date/Time Expressions

  • Hour, Minute, Second: Incorrectly apply timezone conversion to TimestampNTZ inputs. TimestampNTZ stores local time without timezone, so no conversion should be applied. These expressions work correctly with Timestamp inputs. #3180
  • TruncTimestamp (date_trunc): Produces incorrect results when used with non-UTC timezones. Compatible when timezone is UTC. #2649
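
A sketch illustrating the TruncTimestamp caveat; spark.sql.session.timeZone is the standard Spark setting that determines the session timezone:

```scala
// Compatible: session timezone is UTC.
spark.conf.set("spark.sql.session.timeZone", "UTC")
spark.sql("SELECT date_trunc('DAY', TIMESTAMP '2024-03-01 13:45:00')").show()

// Known to produce incorrect results natively with non-UTC timezones (#2649).
spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
```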

Struct Expressions

  • StructsToJson (to_json): Does not support +Infinity and -Infinity for numeric types (float, double). #3016
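
A sketch of an input that is not supported natively, assuming a SparkSession named spark:

```scala
import spark.implicits._
import org.apache.spark.sql.functions.{col, struct, to_json}

val df = Seq(Double.PositiveInfinity, 1.5).toDF("x")
df.select(to_json(struct(col("x")))).show(false)
// Rows containing +Infinity or -Infinity are not supported natively (#3016).
```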

Regular Expressions

Comet uses the Rust regex crate for evaluating regular expressions, which behaves differently from Java's regular expression engine in some cases. Comet will fall back to Spark for patterns that are known to produce different results, but this can be overridden by setting spark.comet.expression.regexp.allowIncompatible=true.
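
One concrete gap: the Rust regex crate does not support backreferences or lookaround, which Java's engine does. A sketch of the kind of pattern affected:

```scala
// '(abc)\1' uses a backreference, which the Rust regex crate rejects,
// so a pattern like this cannot be evaluated by the native engine.
spark.sql("""SELECT regexp_extract('abcabc', '(abc)\\1', 0)""").show()
```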

Window Functions

Comet's support for window functions is incomplete and known to produce incorrect results, so it is disabled by default and should not be used in production. It will be enabled in a future release. Tracking issue: #2721.

Round-Robin Partitioning

Comet's native shuffle implementation of round-robin partitioning (df.repartition(n)) is not compatible with Spark's implementation and is disabled by default. It can be enabled by setting spark.comet.native.shuffle.partitioning.roundrobin.enabled=true.

Why the incompatibility exists:

Spark's round-robin partitioning sorts rows by their binary UnsafeRow representation before assigning them to partitions. This ensures deterministic output for fault tolerance (task retries produce identical results). Comet uses Arrow format internally, which has a completely different binary layout than UnsafeRow, making it impossible to match Spark's exact partition assignments.

Comet's approach:

Instead of true round-robin assignment, Comet implements round-robin as hash partitioning on ALL columns. This achieves the same semantic goals:

  • Even distribution: Rows are distributed evenly across partitions, provided the hash values vary sufficiently (in some cases there can be skew)
  • Deterministic: Same input always produces the same partition assignments (important for fault tolerance)
  • No semantic grouping: Unlike hash partitioning on specific columns, this doesn't group related rows together

The only difference is that Comet's partition assignments will differ from Spark's. Sorted results are identical to Spark's; unsorted results may have a different row ordering.
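
A sketch of opting in, assuming an existing DataFrame df:

```scala
// Enable Comet's native round-robin (hash-on-all-columns) partitioning.
spark.conf.set("spark.comet.native.shuffle.partitioning.roundrobin.enabled", "true")

// Partition assignments will differ from Spark's, but distribution is even
// and deterministic; sorted results are identical to Spark's.
val repartitioned = df.repartition(8)
```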

Cast

Cast operations in Comet fall into the following levels of support:

  • C (Compatible): The results match Apache Spark
  • I (Incompatible): The results may match Apache Spark for some inputs, but there are known issues where some inputs will result in incorrect results or exceptions. The query stage will fall back to Spark by default. Setting spark.comet.expression.Cast.allowIncompatible=true will allow all incompatible casts to run natively in Comet, but this is not recommended for production use.
  • U (Unsupported): Comet does not provide a native version of this cast expression and the query stage will fall back to Spark.
  • N/A: Spark does not support this cast.

String to Decimal

Comet's native CAST(string AS DECIMAL) implementation matches Apache Spark's behavior, including:

  • Leading and trailing ASCII whitespace is trimmed before parsing.
  • Null bytes (\u0000) at the start or end of a string are trimmed, matching Spark's UTF8String behavior. Null bytes embedded in the middle of a string produce NULL.
  • Fullwidth Unicode digits (U+FF10–U+FF19, e.g. １２３.４５) are treated as their ASCII equivalents, so CAST('１２３.４５' AS DECIMAL(10,2)) returns 123.45.
  • Scientific notation (e.g. 1.23E+5) is supported.
  • Special values (inf, infinity, nan) produce NULL.
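
A few examples of the behaviors listed above, as Spark SQL run from Scala:

```scala
spark.sql("SELECT CAST('  123.45  ' AS DECIMAL(10,2))").show() // 123.45 (whitespace trimmed)
spark.sql("SELECT CAST('1.23E+5' AS DECIMAL(10,2))").show()    // 123000.00 (scientific notation)
spark.sql("SELECT CAST('inf' AS DECIMAL(10,2))").show()        // NULL (special value)
```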

String to Timestamp

Comet's native CAST(string AS TIMESTAMP) implementation supports all timestamp formats accepted by Apache Spark, including ISO 8601 date-time strings, date-only strings, time-only strings (HH:MM:SS), embedded timezone offsets (e.g. +07:30, GMT-01:00, UTC), named timezone suffixes (e.g. Europe/Moscow), and the full Spark timestamp year range (-290308 to 294247). Note that CAST(string AS DATE) is only compatible for years between 262143 BC and 262142 AD due to an underlying library limitation.
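
A few examples of accepted formats (a sketch; results for date-only and time-only strings depend on the session timezone and current date):

```scala
spark.sql("SELECT CAST('2024-01-15T10:30:00+07:30' AS TIMESTAMP)").show() // embedded offset
spark.sql("SELECT CAST('2024-01-15' AS TIMESTAMP)").show()                // date-only
spark.sql("SELECT CAST('10:30:00' AS TIMESTAMP)").show()                  // time-only
```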

Decimal with Negative Scale to String

Casting a DecimalType with a negative scale to StringType is marked as incompatible when spark.sql.legacy.allowNegativeScaleOfDecimal is false (the default). When that config is disabled, Spark cannot create negative-scale decimals, so Comet falls back to avoid running native execution on unexpected inputs.

When spark.sql.legacy.allowNegativeScaleOfDecimal=true, the cast is compatible. Comet matches Spark's behavior of using Java BigDecimal.toString() semantics, which produces scientific notation (e.g. a value of 12300 stored as Decimal(7,-2) with unscaled value 123 is rendered as "1.23E+4").
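
A sketch of the compatible case described above:

```scala
spark.conf.set("spark.sql.legacy.allowNegativeScaleOfDecimal", "true")
// 12300 stored as Decimal(7,-2) has unscaled value 123; its string form
// uses Java BigDecimal.toString() semantics and renders as "1.23E+4".
spark.sql("SELECT CAST(CAST(12300 AS DECIMAL(7,-2)) AS STRING)").show()
```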

Legacy Mode

Try Mode

ANSI Mode

See the tracking issue for more details.