
Excel

Bases: ReadWriteFileFormat

Excel file format.

Based on Spark Excel file format.

Supports files with .xlsx (read and write) and .xls (read-only) extensions.

Version compatibility
  • Spark versions: 3.2.x - 4.0.x

    Warning

    Not all combinations of Spark version and package version are supported. See Maven index and official documentation.

  • Java versions: 8 - 22

See the documentation linked above.

Added in 0.9.4

Examples:

Note

You can pass any option mentioned in official documentation. Option names should be in camelCase!

The set of supported options depends on spark-excel package version.

from pyspark.sql import SparkSession
from onetl.file.format import Excel

# Create Spark session with Excel package loaded
maven_packages = Excel.get_packages(
    package_version="0.31.2",
    spark_version="3.5.8",
)
spark = (
    SparkSession.builder.appName("spark-app-name")
    .config("spark.jars.packages", ",".join(maven_packages))
    .getOrCreate()
)

excel = Excel(header=True, inferSchema=True)

# Read a specific sheet starting from a given cell
excel = Excel(header=True, dataAddress="'Sheet1'!A1")

header = False

If True, the first row in the file is treated as a header. Default False.

dataAddress = None

Cell address used as a starting point. For example: 'A1' or 'Sheet1'!A1.
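The address format is an optional sheet name in single quotes, a ! separator, and a cell reference. As a rough illustration of the format only (this helper is hypothetical, not part of onetl or spark-excel):

```python
# Hypothetical helper illustrating the data address format:
# an optional "'SheetName'!" prefix followed by a cell reference.
def split_data_address(address):
    """Split a data address into (sheet_name, cell_reference)."""
    if "!" in address:
        sheet, cell = address.split("!", 1)
        return sheet.strip("'"), cell
    return None, address  # no sheet name, address is just a cell

print(split_data_address("'Sheet1'!A1"))  # ('Sheet1', 'A1')
print(split_data_address("A1"))           # (None, 'A1')
```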

treatEmptyValuesAsNulls = None

If True, empty cells are parsed as null values. If False, empty cells are parsed as empty strings. Default True.

Note

Used only for reading files.

setErrorCellsToFallbackValues = None

If True, cells containing #N/A value are replaced with default value for column type, e.g. 0 for IntegerType(). If False, #N/A values are replaced with null. Default False.

Note

Used only for reading files.

usePlainNumberFormat = None

If True, read or write numeric values with plain format, without using scientific notation or rounding. Default False.

inferSchema = None

If True, infer DataFrame schema based on cell content. If False and no explicit DataFrame schema is passed, all columns are StringType().

Note

Used only for reading files.
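Conceptually, schema inference scans sample cell values and falls back to a wider type whenever a value does not parse. A toy plain-Python sketch of the idea (not the actual spark-excel logic):

```python
# Toy illustration of schema inference (NOT the actual spark-excel logic):
# start with the narrowest type and widen whenever a value fails to parse.
def infer_column_type(samples):
    inferred = "integer"
    for value in samples:
        if inferred == "integer":
            try:
                int(value)
                continue
            except ValueError:
                inferred = "double"
        if inferred == "double":
            try:
                float(value)
                continue
            except ValueError:
                inferred = "string"
        return "string"
    return inferred

print(infer_column_type(["1", "2", "3"]))  # integer
print(infer_column_type(["1", "2.5"]))     # double
print(infer_column_type(["1", "abc"]))     # string
```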

timestampFormat = None

Format string used for parsing or serializing timestamp values. Default yyyy-mm-dd hh:mm:ss[.fffffffff].

maxRowsInMemory = None

If set, use a streaming reader that fetches only the specified number of rows per iteration, reducing memory usage for large files. Default None, which means the entire file content is read into memory.

Warning

Can be used only with .xlsx files; fails on .xls files.

Note

Used only for reading files.
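The difference between the two modes can be sketched in plain Python (an illustration of the concept, not spark-excel code): a full read materializes every row at once, while a streaming read keeps at most maxRowsInMemory rows in memory per batch.

```python
# Toy sketch contrasting full-load vs streaming reads (illustration only).
def read_all(rows):
    # maxRowsInMemory=None: materialize everything at once
    return list(rows)

def read_streaming(rows, max_rows_in_memory):
    # maxRowsInMemory=N: yield fixed-size batches, holding only N rows at a time
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == max_rows_in_memory:
            yield batch
            batch = []
    if batch:
        yield batch

print(list(read_streaming(range(5), 2)))  # [[0, 1], [2, 3], [4]]
```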

maxByteArraySize = None

If set, overrides the memory limit (in bytes) for byte arrays used when reading rows from the input file. Default 0, which means the default limit is used.

See IOUtils.setByteArrayMaxOverride documentation.

Note

Used only for reading files.

tempFileThreshold = None

If value is greater than 0, large zip entries will be written to temporary files after reaching this threshold. If value is 0, all zip entries will be written to temporary files. If value is -1, no temp files will be created, which may cause errors if zip entry is larger than 2GiB.

Note

Used only for reading files.
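The three threshold cases described above can be summarized with a small sketch (illustration only, not POI or spark-excel code; whether the boundary is inclusive is an assumption here):

```python
# Toy sketch of the tempFileThreshold semantics (illustration only).
def storage_for_entry(entry_size, threshold):
    if threshold == -1:
        return "memory"  # never spill; may fail for entries larger than 2 GiB
    if threshold == 0:
        return "temp_file"  # always spill to a temporary file
    # Assumption: entries at or above the threshold are spilled
    return "temp_file" if entry_size >= threshold else "memory"

print(storage_for_entry(10, -1))    # memory
print(storage_for_entry(10, 0))     # temp_file
print(storage_for_entry(10, 100))   # memory
print(storage_for_entry(500, 100))  # temp_file
```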

excerptSize = None

If inferSchema=True, the number of rows used to infer the schema. Default 10.

Note

Used only for reading files.

workbookPassword = None

If the Excel file is encrypted, the password used to open it.

Note

Used only for reading files. Cannot be used to write files.

get_packages(package_version, spark_version, scala_version=None) classmethod

Get package names to be downloaded by Spark.

Warning

Not all combinations of Spark version and package version are supported. See Maven index and official documentation.

Added in 0.9.4

Changed in 0.14.0

Maven package com.crealytics:spark-excel was renamed to dev.mauch:spark-excel.

Parameters:

  • package_version (str) –

    Package version in format major.minor.patch.

    Changed in 0.14.0

    This parameter is now mandatory.

  • spark_version (str) –

    Spark version in format major.minor.patch.

  • scala_version (str, default: None) –

    Scala version in format major.minor.

    If None, spark_version is used to determine Scala version.

Examples:

from onetl.file.format import Excel

Excel.get_packages(
    package_version="0.31.2",
    spark_version="3.5.8",
)
Excel.get_packages(
    package_version="0.31.2",
    spark_version="3.5.8",
    scala_version="2.12",
)