Options
Bases: FileDFWriteOptions, GenericOptions
Options for FileDFWriter.
Added in 0.9.0
Examples:
Note
You can pass any value supported by Spark,
even if it is not mentioned in this documentation. Option names should be in camelCase!
The set of supported options depends on the Spark version.
from onetl.file import FileDFWriter
options = FileDFWriter.Options(
if_exists="replace_overlapping_partitions",
partitionBy="month",
)
if_exists = FileDFExistBehavior.APPEND
Behavior for existing target directory.
If the target directory does not exist, it will be created. If it does exist, the behavior depends on the chosen value.
Changed in 0.13.0
Default value was changed from error to append
Possible values:

- error
  If the folder already exists, raise an exception.
  Same as Spark's df.write.mode("error").save().

- skip_entire_directory
  If the folder already exists, leave the existing files intact and stop immediately without any errors.
  Same as Spark's df.write.mode("ignore").save().

- append (default)
  Appends data to the existing directory.

  Behavior in detail:

  - Directory does not exist: the directory is created using all the provided options (format, partition_by, etc).

  - Directory exists, does not contain partitions, but partition_by is set: data is appended to the directory, using a partitioned directory structure.

    Warning: existing files remain in the root of the directory, but Spark will ignore them while reading, unless recursive=True is used.

  - Directory exists and contains partitions, but partition_by is not set: data is appended to the root of the directory instead of the nested partition directories.

    Warning: Spark will ignore such files while reading, unless recursive=True is used.

  - Directory exists and contains partitions, but with a different partitioning schema than partition_by: data is appended to the directory with the new partitioning schema.

    Warning: Spark cannot read a directory with multiple partitioning schemas, unless recursive=True is used to disable partition scanning.

  - Directory exists and is partitioned according to partition_by, but a partition is present only in the dataframe: a new partition directory is created.

  - Directory exists and is partitioned according to partition_by, and a partition is present in both the dataframe and the directory: new files are added to the existing partition directory; existing files are still present.

  - Directory exists and is partitioned according to partition_by, but a partition is present only in the directory, not in the dataframe: the existing partition is left intact.
- replace_overlapping_partitions
  If partitions from the dataframe already exist in the directory structure, they will be overwritten.
  Same as Spark's df.write.option("partitionOverwriteMode", "dynamic").mode("overwrite").save().

  Danger: this mode makes sense ONLY if the directory is partitioned. IF NOT, YOU'LL LOSE YOUR DATA!

  Behavior in detail:

  - Directory does not exist: the directory is created using all the provided options (format, partition_by, etc).

  - Directory exists, does not contain partitions, but partition_by is set: the directory will be deleted and recreated with partitions.

  - Directory exists and contains partitions, but partition_by is not set: the directory will be deleted and recreated with partitions.

  - Directory exists and contains partitions, but with a different partitioning schema than partition_by: data is appended to the directory with the new partitioning schema.

    Warning: Spark cannot read a directory with multiple partitioning schemas, unless recursive=True is used to disable partition scanning.

  - Directory exists and is partitioned according to partition_by, but a partition is present only in the dataframe: a new partition directory is created.

  - Directory exists and is partitioned according to partition_by, and a partition is present in both the dataframe and the directory: the partition directory will be deleted and a new one created with files containing the data from the dataframe.

  - Directory exists and is partitioned according to partition_by, but a partition is present only in the directory, not in the dataframe: the existing partition is left intact.

- replace_entire_directory
  Remove the existing directory and create a new one, overwriting all existing data. All existing partitions are dropped.
  Same as Spark's df.write.option("partitionOverwriteMode", "static").mode("overwrite").save().
Note

Unlike pure Spark, the config option spark.sql.sources.partitionOverwriteMode
does not affect the behavior of any mode.
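As a rough illustration of the rules above, here is a toy pure-Python model of which files end up in each partition directory after a write. This is only a sketch of the documented semantics, NOT onetl or Spark code; real writes operate on actual directories and file contents.

```python
def partitions_after_write(existing, incoming, if_exists):
    """Toy model of the if_exists values described above.

    ``existing`` and ``incoming`` map partition paths like "month=01"
    to lists of file names. Illustration only, not real onetl/Spark code.
    """
    if if_exists == "error":
        # existing target directory raises an exception
        if existing:
            raise FileExistsError("target directory already exists")
        return {part: list(files) for part, files in incoming.items()}
    if if_exists == "skip_entire_directory":
        # leave existing files intact; write only if the target is absent
        if existing:
            return {part: list(files) for part, files in existing.items()}
        return {part: list(files) for part, files in incoming.items()}
    if if_exists == "append":
        # new files are added next to the existing ones in each partition
        result = {part: list(files) for part, files in existing.items()}
        for part, files in incoming.items():
            result.setdefault(part, []).extend(files)
        return result
    if if_exists == "replace_overlapping_partitions":
        # partitions present in the dataframe replace the matching
        # directories; other existing partitions are left intact
        result = {part: list(files) for part, files in existing.items()}
        for part, files in incoming.items():
            result[part] = list(files)
        return result
    if if_exists == "replace_entire_directory":
        # everything that was there before is dropped
        return {part: list(files) for part, files in incoming.items()}
    raise ValueError(f"unknown if_exists value: {if_exists!r}")
```

For example, with an existing partition month=02, append keeps the old files next to the new ones, while replace_overlapping_partitions drops them but leaves the non-overlapping month=01 partition untouched.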
partition_by = Field(default=None, alias='partitionBy')
List of columns to be used for data partitioning. None means partitioning is disabled.
Each partition is a folder which contains only files with a specific column value,
like some.csv/col1=value1, some.csv/col1=value2, and so on.
Multiple partition columns mean a nested folder structure, like some.csv/col1=val1/col2=val2.
If the WHERE clause of a query contains an expression like partition = value,
Spark will scan only the files in that specific partition.
Examples: reg_id or ["reg_id", "business_dt"]
Note
Values should be scalars (integers, strings),
and either static (countryId) or incrementing (dates, years), with a low
number of distinct values.
Columns like userId or datetime/timestamp should NOT be used for partitioning.
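The col=value directory layout described above can be sketched with a small helper. This is a hypothetical toy function for illustration, not part of the onetl or Spark API:

```python
import os


def partition_path(root, partition_by, row):
    """Toy helper (not onetl/Spark API): build the nested ``col=value``
    directory a row would land in, given the partition columns."""
    segments = [f"{col}={row[col]}" for col in partition_by]
    return os.path.join(root, *segments)
```

For instance, with partition_by=["col1", "col2"] a row with col1=val1 and col2=val2 lands under some.csv/col1=val1/col2=val2, matching the layout shown above.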