* docs: remove all mentions of native_comet scan
* update
* prettier
* docs: improve parquet_scans.md accuracy and completeness
Fix grammar, add encryption fallback and native_iceberg_compat
hard-coded config limitations, clarify S3 section applies to both
scan implementations, and remove orphaned link references.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* update config docs
* prettier
* docs: clarify parquet scan limitations and fallback behavior
Clarify which limitations fall back to Spark vs which may produce
incorrect results. Add missing documented limitations for
native_datafusion (DPP, input_file_name, metadata columns). Fix
misleading wording for ignoreCorruptFiles/ignoreMissingFiles. Note
that auto mode currently always selects native_iceberg_compat.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: remove redundant fallback language in native_datafusion section
The section intro already states all limitations fall back to Spark,
so individual bullet points don't need to repeat it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: separate fallback limitations from incorrect-results limitations
Restructure shared and per-scan limitation lists into two clear
categories: features that fall back to Spark (safe) and issues that
may produce incorrect results without falling back. Remove redundant
"Comet falls back to Spark" from individual bullets where the section
intro already states it.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix
* update
* remove encryption from unsupported list, move DPP to common list
* Update docs/source/contributor-guide/parquet_scans.md
Co-authored-by: Oleks V <comphead@users.noreply.github.com>
* address feedback
* address feedback
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Matt Butrovich <mbutrovich@users.noreply.github.com>
Co-authored-by: Oleks V <comphead@users.noreply.github.com>

| Implementation | Description |
|---|---|
|`native_comet`|**Deprecated.** This implementation provides strong compatibility with Spark but does not support complex types. This is the original scan implementation in Comet and will be removed in a future release. |
|`native_iceberg_compat`| This implementation delegates to DataFusion's `DataSourceExec` but uses a hybrid approach of JVM and native code. This scan is designed to be integrated with Iceberg in the future. |
|`native_datafusion`| This experimental implementation delegates to DataFusion's `DataSourceExec` for full native execution. There are known compatibility issues when using this scan. |
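
To make the table concrete, here is a minimal sketch of selecting one of these implementations from a Spark
session. The plugin class `org.apache.spark.CometPlugin` and the selector property `spark.comet.scan.impl` are
assumptions not shown in this section; check the Comet configuration guide for the authoritative names and values.

```scala
import org.apache.spark.sql.SparkSession

// Assumed names: `org.apache.spark.CometPlugin` as the Spark plugin class and
// `spark.comet.scan.impl` as the scan-implementation selector.
val spark = SparkSession.builder()
  .appName("comet-scan-impl-selection")
  .config("spark.plugins", "org.apache.spark.CometPlugin")
  .config("spark.comet.enabled", "true")
  .config("spark.comet.scan.impl", "native_iceberg_compat") // or "native_datafusion", "auto"
  .getOrCreate()

// Parquet reads now go through the selected scan implementation wherever it is supported.
val df = spark.read.parquet("/path/to/data.parquet")
df.show()
```

Per the commit notes above, the `auto` setting currently always selects `native_iceberg_compat`.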

The `native_datafusion` and `native_iceberg_compat` scans provide the following benefits over the `native_comet`
implementation:

- Leverages the DataFusion community's ongoing improvements to `DataSourceExec`
- Provides support for reading complex types (structs, arrays, and maps)
- Delegates Parquet decoding to native Rust code rather than JVM-side decoding
- Improves performance

> **Note on mutable buffers:** Both `native_comet` and `native_iceberg_compat` use reusable mutable buffers
> when transferring data from JVM to native code via Arrow FFI. The `native_iceberg_compat` implementation uses DataFusion's native Parquet reader for data columns, bypassing Comet's mutable buffer infrastructure entirely. However, partition columns still use `ConstantColumnReader`, which relies on Comet's mutable buffers that are reused across batches. This means native operators that buffer data (such as `SortExec` or `ShuffleWriterExec`) must perform deep copies to avoid data corruption.
> See the [FFI documentation](ffi.md) for details on the `arrow_ffi_safe` flag and ownership semantics.

The `native_datafusion` and `native_iceberg_compat` scans share the following limitations:

- When reading Parquet files written by systems other than Spark that contain columns with the logical type `UINT_8`
  (unsigned 8-bit integers), Comet may produce different results than Spark. Spark maps `UINT_8` to `ShortType`, but
  Comet's Arrow-based readers respect the unsigned type and read the data as unsigned rather than signed. Since Comet
  cannot distinguish `ShortType` columns that came from `UINT_8` versus signed `INT16`, by default Comet falls back to
  Spark when scanning Parquet files containing `ShortType` columns. This behavior can be disabled by setting
  `spark.comet.scan.unsignedSmallIntSafetyCheck=false` (see the sketch after this list). Note that `ByteType` columns
  are always safe because they can only come from signed `INT8`, where truncation preserves the signed value.
- No support for default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.
- No support for datetime rebasing detection or the `spark.comet.exceptionOnDatetimeRebase` configuration. When reading
  Parquet files containing dates or timestamps written before Spark 3.0 (which used a hybrid Julian/Gregorian calendar),
  the `native_comet` implementation can detect these legacy values and either throw an exception or read them without
  rebasing. The DataFusion-based implementations do not have this detection capability and will read all dates/timestamps
  as if they were written using the Proleptic Gregorian calendar. This may produce incorrect results for dates before
  October 15, 1582.
- No support for Spark's Datasource V2 API. When `spark.sql.sources.useV1SourceList` does not include `parquet`,
  Spark uses the V2 API for Parquet scans. The DataFusion-based implementations only support the V1 API, so Comet
  will fall back to `native_comet` when V2 is enabled.
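
As referenced in the bullets above, here is a minimal sketch of the two session-level settings involved in these
limitations. The property names are taken directly from the list; the values are illustrative only, not a
recommendation.

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: property names come from the limitations list above; values are examples.
val spark = SparkSession.builder()
  .appName("comet-scan-limitation-settings")
  // Disable the ShortType safety check only when you are certain that no input files
  // contain UINT_8 columns; otherwise results may differ from Spark without a fallback.
  .config("spark.comet.scan.unsignedSmallIntSafetyCheck", "false")
  .getOrCreate()

// The DataFusion-based scans require Spark's V1 Parquet source. Spark's default for this
// list already includes "parquet"; this check simply makes the requirement explicit.
val v1Sources = spark.conf.get("spark.sql.sources.useV1SourceList")
require(v1Sources.split(",").map(_.trim).contains("parquet"),
  "Parquet must use the V1 DataSource API for the DataFusion-based scans")
```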

The `native_datafusion` scan has some additional limitations of its own. The `native_iceberg_compat` scan effectively
hard-codes some Spark Parquet configuration settings, including `spark.sql.parquet.inferTimestampNTZ.enabled` and
`spark.sql.legacy.parquet.nanosAsLong`. See
[issue #1816](https://github.com/apache/datafusion-comet/issues/1816) for more details.

## S3 Support

### `native_comet`

The `native_comet` Parquet scan implementation reads data from S3 using the
[Hadoop-AWS module](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html), which
is identical to the approach commonly used with vanilla Spark. AWS credential configuration and other Hadoop S3A
configurations work the same way as in vanilla Spark.

### `native_datafusion` and `native_iceberg_compat`

The `native_datafusion` and `native_iceberg_compat` Parquet scan implementations completely offload data loading
to native code. They use the [`object_store` crate](https://crates.io/crates/object_store) to read data from S3, and
existing Hadoop S3A settings continue to work as long as the configurations are supported and can be translated.
All configuration options support bucket-specific overrides.
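
As a sketch of the bucket-specific override mentioned above, the example below assumes the override follows Hadoop
S3A's standard per-bucket pattern, `fs.s3a.bucket.<bucket-name>.<option>`; the exact pattern and the set of options
that Comet translates are documented in the configuration tables of the full page and are not reproduced here.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical bucket name `analytics-eu`; endpoint values are examples only.
val spark = SparkSession.builder()
  .appName("comet-s3-per-bucket-override")
  // Global endpoint used for most buckets.
  .config("spark.hadoop.fs.s3a.endpoint", "s3.us-east-1.amazonaws.com")
  // Per-bucket override: only reads from `analytics-eu` use this endpoint.
  .config("spark.hadoop.fs.s3a.bucket.analytics-eu.endpoint", "s3.eu-west-1.amazonaws.com")
  .getOrCreate()

// Reads from this bucket pick up the per-bucket endpoint; other buckets use the global one.
val df = spark.read.parquet("s3a://analytics-eu/warehouse/events/")
```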

#### Examples

The following examples demonstrate how to configure S3 access with the `native_datafusion` and `native_iceberg_compat`
Parquet scan implementations using different authentication methods.
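
For instance, here is a hedged sketch of one authentication method: static access keys supplied through standard
Hadoop S3A properties. The bucket name and keys are placeholders, and other methods (such as instance profiles or
anonymous access) follow the same pattern with different properties.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("comet-s3-static-credentials")
  .config("spark.plugins", "org.apache.spark.CometPlugin")
  .config("spark.comet.enabled", "true")
  // Static credentials via standard Hadoop S3A properties. Placeholders only;
  // never hard-code real credentials in production jobs.
  .config("spark.hadoop.fs.s3a.access.key", "<access-key>")
  .config("spark.hadoop.fs.s3a.secret.key", "<secret-key>")
  .config("spark.hadoop.fs.s3a.endpoint", "s3.us-east-1.amazonaws.com")
  .getOrCreate()

// The native scans pick these up once they are translated to the equivalent object_store options.
val df = spark.read.parquet("s3a://example-bucket/data/")
df.show()
```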

The S3 support of `native_datafusion` and `native_iceberg_compat` has the following limitations:

1. **Partial Hadoop S3A configuration support**: Not all Hadoop S3A configurations are currently supported. Only the configurations listed in the tables above are translated and applied to the underlying `object_store` crate.
2. **Custom credential providers**: Custom implementations of AWS credential providers are not supported. The implementation only supports the standard credential providers listed in the table above. We are planning to add support for custom credential providers through a JNI-based adapter that will allow calling Java credential providers from native code. See [issue #1829](https://github.com/apache/datafusion-comet/issues/1829) for more details. A sketch of selecting one of the standard providers follows this list.
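
As noted at the end of item 2, here is a sketch of pointing S3A at one of Hadoop's standard credential providers
rather than a custom implementation. Whether a specific provider class is translated by Comet should be checked
against the provider table referenced above; the class below is one common Hadoop-provided option.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("comet-s3-standard-credential-provider")
  // A Hadoop-provided credential provider that reads fs.s3a.access.key / fs.s3a.secret.key.
  // Custom provider implementations are not supported by the native scans (see item 2 above).
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
  .config("spark.hadoop.fs.s3a.access.key", "<access-key>")
  .config("spark.hadoop.fs.s3a.secret.key", "<secret-key>")
  .getOrCreate()
```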