IncrementalStrategy
Bases: HWMStrategy
Incremental strategy for DB Reader/File Downloader.
Used to fetch only new rows/files from a source by filtering out items already covered by the previous HWM value.
For DB Reader, the first incremental run is just the same as SnapshotStrategy:
SELECT id, data FROM mydata;
The maximum value of the id column (e.g. 1000) will be saved as HWM to the HWM Store.
Next incremental run will read only new data from the source:
SELECT id, data FROM mydata WHERE id > 1000; -- hwm value
The row with id=1000 is skipped because it has already been read.
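The query generation above can be sketched with a hypothetical helper (build_incremental_query is an illustrative name, not part of the onETL API):

```python
def build_incremental_query(table, columns, hwm_column, hwm_value=None):
    """Build a SELECT statement; the first run has no HWM, so no WHERE clause."""
    query = f"SELECT {', '.join(columns)} FROM {table}"
    if hwm_value is not None:
        # subsequent runs read only rows above the stored HWM value
        query += f" WHERE {hwm_column} > {hwm_value}"
    return query


# First run behaves like SnapshotStrategy:
print(build_incremental_query("mydata", ["id", "data"], "id"))
# -> SELECT id, data FROM mydata

# Next run filters by the HWM saved in the HWM Store:
print(build_incremental_query("mydata", ["id", "data"], "id", hwm_value=1000))
# -> SELECT id, data FROM mydata WHERE id > 1000
```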
Warning
If code inside the context manager raises an exception, like:

with IncrementalStrategy():
    df = reader.run()  # something went wrong here
    writer.run(df)  # or here
    # or here...

then the new HWM value will NOT be saved to the HWM Store, and the next incremental run will read the same data again.
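The save-on-success behavior can be sketched as a minimal context manager. This is a simplified model, not the real implementation; DictHWMStore is a hypothetical toy store:

```python
from contextlib import contextmanager


@contextmanager
def incremental_strategy(hwm_store, hwm_name):
    """Yield mutable HWM state; persist it only if the body did not raise."""
    state = {"value": hwm_store.get(hwm_name)}
    yield state
    # this line is reached only on a clean exit from the `with` block
    hwm_store.set(hwm_name, state["value"])


class DictHWMStore:
    """Toy in-memory HWM Store for illustration."""

    def __init__(self):
        self.data = {}

    def get(self, name):
        return self.data.get(name)

    def set(self, name, value):
        self.data[name] = value


store = DictHWMStore()
with incremental_strategy(store, "my_hwm") as state:
    state["value"] = 1000  # successful read advances the state
print(store.get("my_hwm"))  # 1000

try:
    with incremental_strategy(store, "my_hwm") as state:
        state["value"] = 2000
        raise RuntimeError("something went wrong")
except RuntimeError:
    pass
print(store.get("my_hwm"))  # still 1000: a failed run does not advance the HWM
```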
For File Downloader:
Behavior depends on hwm type.
FileListHWM
First incremental run is just the same as SnapshotStrategy - all files are downloaded:
$ hdfs dfs -ls /path
/path/my/file1
/path/my/file2
DownloadResult(
    ...,
    successful={
        LocalFile("/downloaded/file1"),
        LocalFile("/downloaded/file2"),
    },
)
A FileListHWM object is saved into the HWM Store:
FileListHWM(
    ...,
    directory="/path",
    value=[
        "/path/my/file1",
        "/path/my/file2",
    ],
)
Next incremental run will download only new files:

$ hdfs dfs -ls /path
/path/my/file1
/path/my/file2
/path/my/file3
# only files which are not covered by FileListHWM
DownloadResult(
    ...,
    successful={
        LocalFile("/downloaded/file3"),
    },
)
FileListHWM will be updated and saved to the HWM Store:
FileListHWM(
    ...,
    directory="/path",
    value=[
        "/path/my/file1",
        "/path/my/file2",
        "/path/my/file3",
    ],
)
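The FileListHWM filtering shown above boils down to a set difference. A minimal sketch, where files_to_download is an illustrative helper rather than the library's API:

```python
def files_to_download(remote_files, hwm_value):
    """Keep only paths that are not already recorded in the HWM value."""
    seen = set(hwm_value)
    return [path for path in remote_files if path not in seen]


remote = ["/path/my/file1", "/path/my/file2", "/path/my/file3"]
hwm_value = ["/path/my/file1", "/path/my/file2"]
print(files_to_download(remote, hwm_value))
# -> ['/path/my/file3']
```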
FileModifiedTimeHWM
First incremental run is just the same as SnapshotStrategy - all files are downloaded:
$ hdfs dfs -ls /path
/path/my/file1
/path/my/file2
DownloadResult(
    ...,
    successful={
        LocalFile("/downloaded/file1"),
        LocalFile("/downloaded/file2"),
    },
)
A FileModifiedTimeHWM object is saved into the HWM Store:
FileModifiedTimeHWM(
    ...,
    directory="/path",
    value=datetime.datetime(2025, 1, 1, 11, 22, 33, 456789, tzinfo=timezone.utc),
)
Next incremental run will download only new files:

$ hdfs dfs -ls /path
/path/my/file1
/path/my/file2
/path/my/file3
# only files which are not covered by FileModifiedTimeHWM
DownloadResult(
    ...,
    successful={
        LocalFile("/downloaded/file3"),
    },
)
FileModifiedTimeHWM will be updated and saved to the HWM Store:
FileModifiedTimeHWM(
    ...,
    directory="/path",
    value=datetime.datetime(2025, 1, 1, 22, 33, 44, 567890, tzinfo=timezone.utc),
)
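The FileModifiedTimeHWM filtering shown above compares each file's modification time against the stored timestamp. A minimal sketch, where files_to_download is an illustrative helper rather than the library's API:

```python
from datetime import datetime, timezone


def files_to_download(remote_files, hwm_value):
    """Keep only files whose modification time is newer than the stored HWM."""
    return [path for path, mtime in remote_files if mtime > hwm_value]


hwm_value = datetime(2025, 1, 1, 11, 22, 33, 456789, tzinfo=timezone.utc)
remote = [
    ("/path/my/file1", datetime(2025, 1, 1, 10, 0, 0, tzinfo=timezone.utc)),
    ("/path/my/file2", datetime(2025, 1, 1, 11, 0, 0, tzinfo=timezone.utc)),
    ("/path/my/file3", datetime(2025, 1, 1, 22, 33, 44, tzinfo=timezone.utc)),
]
print(files_to_download(remote, hwm_value))
# -> ['/path/my/file3']
```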
Warning
FileDownloader updates HWM in the HWM Store at the end of the .run() call,
NOT while exiting the strategy context. This is because:
- FileDownloader does not raise exceptions if some file cannot be downloaded.
- FileDownloader creates files on the local filesystem, and file content may differ between modes.
- It can remove files from the source if delete_source is set to True.
Added in 0.1.0
Parameters:

- offset (Any, default: None) – If passed, the offset value will be used to read rows which appeared in the source after the previous read.

  For example, the previous incremental run returned rows:

  898
  899
  900
  1000

  Current HWM value is 1000.

  But since then, a few more rows appeared in the source:

  898
  899
  900
  901   # new
  902   # new
  ...
  999   # new
  1000

  and you need to read them too.

  So you can set offset=100, and the next incremental run will generate a SQL query like:

  SELECT id, data FROM public.mydata WHERE id > 900; -- 900 = 1000 - 100 = hwm - offset

  returning rows since 901 (not 900), including 1000 which was already captured by the HWM.

  Warning
  This can lead to reading duplicated values from the table. You probably need an additional deduplication step to handle them.

  Warning
  Cannot be used with File Downloader.

  Note
  The offset value will be subtracted from the HWM, so it should have a proper type. For example, for a TIMESTAMP column the offset type should be datetime.timedelta, not int.
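The type requirement comes down to whether `hwm - offset` is a valid operation, which can be checked directly:

```python
from datetime import date, timedelta

# Integer HWM column: an integer offset works
hwm = 1000
print(hwm - 100)  # 900 -> WHERE id > 900

# DATE/TIMESTAMP HWM column: the offset must be a timedelta
hwm_dt = date(2021, 1, 10)
print(hwm_dt - timedelta(days=1))  # 2021-01-09 -> WHERE business_dt > '2021-01-09'

# Using a plain int here fails:
try:
    hwm_dt - 1
except TypeError as e:
    print(f"TypeError: {e}")
```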
Examples:
from onetl.db import DBReader, DBWriter
from onetl.strategy import IncrementalStrategy
reader = DBReader(
connection=postgres,
source="public.mydata",
columns=["id", "data"],
hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="id"),
)
writer = DBWriter(connection=hive, target="db.newtable")
with IncrementalStrategy():
    df = reader.run()
    writer.run(df)
-- previous HWM value was 1000
-- DBReader will generate query like:
SELECT id, data
FROM public.mydata
WHERE id > 1000; -- from HWM (EXCLUDING first row)
from onetl.db import DBReader, DBWriter
from onetl.strategy import IncrementalStrategy
reader = DBReader(
connection=postgres,
source="public.mydata",
columns=["id", "data"],
hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="id"),
)
writer = DBWriter(connection=hive, target="db.newtable")
with IncrementalStrategy(offset=100):
    df = reader.run()
    writer.run(df)
-- previous HWM value was 1000
-- DBReader will generate query like:
SELECT id, data
FROM public.mydata
WHERE id > 900; -- from HWM-offset (EXCLUDING first row)
offset and hwm.expression can be a date or datetime, not only integer:
from onetl.db import DBReader, DBWriter
from onetl.strategy import IncrementalStrategy
from datetime import timedelta
reader = DBReader(
connection=postgres,
source="public.mydata",
columns=["business_dt", "data"],
hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="business_dt"),
)
writer = DBWriter(connection=hive, target="db.newtable")
with IncrementalStrategy(offset=timedelta(days=1)):
    df = reader.run()
    writer.run(df)
-- previous HWM value was '2021-01-10'
-- DBReader will generate query like:
SELECT business_dt, data
FROM public.mydata
WHERE business_dt > CAST('2021-01-09' AS DATE); -- from HWM-offset (EXCLUDING first row)
from onetl.db import DBReader, DBWriter
from onetl.strategy import IncrementalStrategy
reader = DBReader(
connection=kafka,
source="topic_name",
hwm=DBReader.AutoDetectHWM(name="some_hwm_name", expression="offset"),
)
writer = DBWriter(connection=hive, target="db.newtable")
with IncrementalStrategy():
    df = reader.run()
    # current run will fetch only messages which were added since previous run
from onetl.file import FileDownloader
from onetl.strategy import IncrementalStrategy
from etl_entities.hwm import FileListHWM
downloader = FileDownloader(
connection=sftp,
source_path="/remote",
local_path="/local",
hwm=FileListHWM( # mandatory for IncrementalStrategy
name="my_unique_hwm_name",
),
)
with IncrementalStrategy():
    df = downloader.run()
    # current run will download only files which were added since previous run
from onetl.file import FileDownloader
from onetl.strategy import IncrementalStrategy
from etl_entities.hwm import FileModifiedTimeHWM
downloader = FileDownloader(
connection=sftp,
source_path="/remote",
local_path="/local",
hwm=FileModifiedTimeHWM( # mandatory for IncrementalStrategy
name="my_unique_hwm_name",
),
)
with IncrementalStrategy():
    df = downloader.run()
    # current run will download only files which were modified/created since previous run