SAP HANA

Data Link

Basic Functions

Function | Description
Schema Migration

If a selected table does not exist in the Target, the table creation statement is automatically generated from the source metadata combined with the configured mapping and executed on the Target.

Full Data Migration

Logical migration: table data is scanned sequentially and written to the target database in batches.
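
For intuition, here is a minimal sketch of the sequential-scan, batched-write idea, assuming the SAP HANA Python driver hdbcli on the source and pymysql on a MySQL target; the table, columns, connection details, and batch size are illustrative assumptions, not BladePipe's implementation.

```python
from hdbcli import dbapi   # SAP HANA Python client (source side)
import pymysql             # MySQL/MariaDB client (target side)

BATCH_SIZE = 2000  # illustrative batch size

src = dbapi.connect(address="hana-host", port=39015, user="SRC_USER", password="***")
dst = pymysql.connect(host="mysql-host", user="DST_USER", password="***", database="demo")
src_cur, dst_cur = src.cursor(), dst.cursor()

# Sequential scan of the source table; each fetched page is written to the target as one batch.
src_cur.execute('SELECT "ID", "NAME", "UPDATED_AT" FROM "DEMO"."ORDERS" ORDER BY "ID"')
while True:
    rows = [tuple(r) for r in src_cur.fetchmany(BATCH_SIZE)]
    if not rows:
        break
    dst_cur.executemany(
        "INSERT INTO orders (id, name, updated_at) VALUES (%s, %s, %s)", rows
    )
    dst.commit()  # commit per batch so partial progress is kept
```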

Incremental Real-time Sync

Supports synchronization of common DML operations: INSERT, UPDATE, and DELETE.
UPDATE and DELETE on tables without a primary key are not synchronized (manual selection is required).

Data Verification and Correction

Full data verification, with optional correction of inconsistent data based on the verification results. Scheduled runs are supported. Docs: Create a Scheduled Verification and Correction DataJob.
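
A minimal sketch of the verification idea under simplifying assumptions (small table, primary key in the first column, illustrative drivers and names): rows are compared by primary key, and the rows reported as missing or differing are the ones a correction step would re-write.

```python
from hdbcli import dbapi
import pymysql

src = dbapi.connect(address="hana-host", port=39015, user="SRC_USER", password="***")
dst = pymysql.connect(host="mysql-host", user="DST_USER", password="***", database="demo")

def rows_by_pk(cursor, sql):
    # Load all rows and key them by primary key (assumed to be the first column).
    cursor.execute(sql)
    return {row[0]: tuple(row) for row in cursor.fetchall()}

src_rows = rows_by_pk(src.cursor(), 'SELECT "ID", "NAME" FROM "DEMO"."ORDERS"')
dst_rows = rows_by_pk(dst.cursor(), "SELECT id, name FROM orders")

missing = [pk for pk in src_rows if pk not in dst_rows]                            # rows to re-insert
diff = [pk for pk in src_rows if pk in dst_rows and src_rows[pk] != dst_rows[pk]]  # rows to correct
print(f"missing in target: {len(missing)}, differing rows: {len(diff)}")
```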

Modify Subscription

Add, remove, or modify the subscribed tables, with support for historical data migration. Docs: Modify Subscription.

Incremental Position Resetting

Supports resetting the position by data ID or timestamp to re-consume the CDC data from a past period of time.

Table Name Mapping

Supports keeping table names the same as the source, converting them to lowercase, converting them to uppercase, or truncating a trailing '_<number>' suffix.
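
A minimal sketch of the four mapping rules; the rule names and the exact semantics of the '_<number>' truncation (dropping a trailing underscore-plus-digits suffix, e.g. a shard suffix) are illustrative assumptions, not BladePipe identifiers.

```python
import re

def map_table_name(name: str, rule: str) -> str:
    """Illustrative table-name mapping rules."""
    if rule == "KEEP":                      # keep the same as the source
        return name
    if rule == "LOWERCASE":                 # convert to lowercase
        return name.lower()
    if rule == "UPPERCASE":                 # convert to uppercase
        return name.upper()
    if rule == "TRUNCATE_NUMBER_SUFFIX":    # drop a trailing '_<digits>' suffix
        return re.sub(r"_\d+$", "", name)
    raise ValueError(f"unknown rule: {rule}")

print(map_table_name("ORDERS_03", "TRUNCATE_NUMBER_SUFFIX"))  # -> ORDERS
```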

Metadata Retrieval

Look up target tables from source tables, query the tables with data filter conditions configured, and query the tables with target primary keys configured.

Advanced Functions

Function | Description
Trigger-based Incremental Data Sync

The DataJob automatically creates triggers on tables. These triggers capture INSERT, UPDATE, and DELETE events and write them to the CDC tables.
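
For intuition, here is a simplified example of what such a capture trigger could look like, created through the hdbcli Python driver. The trigger body, the CDC table layout, and all names are illustrative assumptions and are far simpler than the triggers BladePipe actually generates.

```python
from hdbcli import dbapi

conn = dbapi.connect(address="hana-host", port=39015, user="SRC_USER", password="***")
cur = conn.cursor()

# Illustrative AFTER INSERT trigger: the new row is serialized and appended to a CDC table.
cur.execute("""
CREATE TRIGGER "DEMO"."ORDERS_CDC_INSERT"
AFTER INSERT ON "DEMO"."ORDERS"
REFERENCING NEW ROW newrow
FOR EACH ROW
BEGIN
    INSERT INTO "CDC_DATA"."CDC_EVENTS" ("TABLE_NAME", "OP", "PAYLOAD", "CAPTURED_AT")
    VALUES ('ORDERS', 'INSERT',
            '{"ID":' || TO_VARCHAR(:newrow.ID) || ',"NAME":"' || :newrow.NAME || '"}',
            CURRENT_UTCTIMESTAMP);
END
""")
```

UPDATE and DELETE events would be captured by analogous AFTER UPDATE and AFTER DELETE triggers that reference the old row.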

Removal of Target Data before Full Data Migration

Remove the existing data in the Target before running the Full Data Migration, applicable to DataJob reruns and scheduled Full Data Migrations.

Recreating Target Table

Recreate the target tables before running the Full Data Migration, applicable to DataJob reruns and scheduled Full Data Migrations.

Incremental Data Write Conflict Resolution Rule

  • IGNORE: Ignore primary key conflicts (skip writing)
  • REPLACE: Replace the entire row in case of a primary key conflict

Handling of Zero Value for Time

Allow configuring how zero time values are handled for different time data types, to prevent errors when writing to the Target.

Scheduled Full Data Migration

Docs 1: Create a Scheduled Full Data DataJob
Docs 2: Incremental Data Migration through Scheduled Full Data Migration

Custom Code

Docs 1: Create a Custom Code DataJob
Docs 2: Debug a Custom Code DataJob
Docs 3: Logging in Custom Code

Data Filtering Conditions

Supports data filtering with WHERE conditions written in a subset of SQL-92. Docs: Create a Data Filtering DataJob.

Limitations and Notes

Limitation | Description
DDL Change Handling

BladePipe captures data changes in a source SAP HANA instance through triggers. DDL synchronization is not supported. If there are DDL changes, follow the steps in Change Schema in a Source SAP HANA Instance.

Hana Data Types in Incremental Sync

In the incremental data sync phase with a source Hana instance, changes to columns of the TEXT, BIN_TEXT, ST_POINT, and ST_GEOMETRY data types cannot be captured by triggers.


Source DataSource

Prerequisites

Prerequisite | Description
Permissions for Account

See Permissions Required for Hana

DataJob Parameters

Parameter | Description
sysTriggerDataSchema

The schema name where the trigger writes incremental data.

sysTriggerDataTable

The table name where the trigger writes incremental data.

incrPagingCount

The number of trigger-captured records queried in each batch during incremental data synchronization.

incrIdleSleepSecond

The interval between queries of the trigger data during idle periods of incremental data synchronization (in seconds).

incrScanIntervalMs

The interval between queries of the trigger data during incremental data synchronization (in milliseconds).

autoCheckTriggerAndReInstall

Check the trigger status and reinstall it when the DataJob starts.

triggerDataCleanEnabled

Enable scheduled cleanup of trigger incremental data.

triggerDataCleanIntervalMin

The cleanup interval for trigger incremental data (in minutes).

triggerDataRetentionMin

The retention time for trigger incremental data (in minutes).

dbHeartbeatEnable

Configure whether to enable heartbeat for the source database.

needTriggerDataJsonEscape

Whether to escape characters (\) in the trigger incremental JSON.

triggerDataJsonQuotation

Custom quotation marks for trigger incremental JSON.

triggerParamBathSize

Set the number of columns involved per variable in the trigger template.

fullBeforeImageEnabled

Enable the trigger to record the complete before image of a row for all column changes.
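
To make the scan-related parameters above concrete, here is a minimal sketch of a polling loop over the trigger data table, assuming sysTriggerDataSchema = CDC_DATA, sysTriggerDataTable = CDC_EVENTS, and a monotonically increasing ID column; it illustrates the parameter semantics only and is not BladePipe's actual consumer.

```python
import time
from hdbcli import dbapi

# Illustrative values for the parameters described above.
INCR_PAGING_COUNT = 500      # incrPagingCount: records fetched per query
INCR_SCAN_INTERVAL_MS = 500  # incrScanIntervalMs: pause between scans that returned data
INCR_IDLE_SLEEP_SECOND = 3   # incrIdleSleepSecond: pause when no new data was found

conn = dbapi.connect(address="hana-host", port=39015, user="SRC_USER", password="***")
cur = conn.cursor()
last_id = 0  # current position in the trigger data table

while True:
    cur.execute(
        'SELECT "ID", "TABLE_NAME", "OP", "PAYLOAD" FROM "CDC_DATA"."CDC_EVENTS" '
        f'WHERE "ID" > ? ORDER BY "ID" LIMIT {INCR_PAGING_COUNT}',
        (last_id,),
    )
    rows = cur.fetchall()
    if not rows:
        time.sleep(INCR_IDLE_SLEEP_SECOND)       # idle: nothing captured since the last scan
        continue
    for _id, table_name, op, payload in rows:
        pass  # apply the change to the target here
    last_id = rows[-1][0]
    time.sleep(INCR_SCAN_INTERVAL_MS / 1000.0)   # pace the next scan
```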

Tips: For general parameter configuration, see General Parameters and Functions.


Target DataSource

Prerequisites

Prerequisite | Description
Permissions for Account

See Permissions Required for MySQL/MariaDB.

Port Preparation

Allow the migration and sync node (Worker) to connect to the MySQL/MariaDB port (e.g., 3306).

DataJob Parameters

Parameter | Description
keyConflictStrategy

Strategy for handling primary key conflicts during write in Incremental DataTask:

  • IGNORE: Ignore conflicts (default)
  • REPLACE: Replace conflicts (optional)
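
For intuition only, the two strategies roughly correspond to the following MySQL statements (issued here through pymysql with illustrative table and column names); BladePipe's actual write path may differ.

```python
import pymysql

conn = pymysql.connect(host="mysql-host", user="DST_USER", password="***", database="demo")
cur = conn.cursor()
row = (42, "alice", "2024-01-01 00:00:00")

# keyConflictStrategy = IGNORE: a conflicting primary key turns the write into a no-op.
cur.execute("INSERT IGNORE INTO orders (id, name, updated_at) VALUES (%s, %s, %s)", row)

# keyConflictStrategy = REPLACE: a conflicting primary key replaces the whole row
# (MySQL's REPLACE deletes the old row and inserts the new one).
cur.execute("REPLACE INTO orders (id, name, updated_at) VALUES (%s, %s, %s)", row)

conn.commit()
```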

dstWholeReplace

Convert INSERT and UPDATE operations into full row replacement in the Target.

deCycle

Enable filtering in bidirectional sync to filter DML/DDL with specific markers.

specialSqlMode

Set a specific SQL mode when initializing the connection between databases.

defaultGisSRID

Set the SRID for GIS data types.

dstTimeZone

Target time zone, e.g., +08:00, Asia/Shanghai, America/New_York, etc.

increParallelApplyStrategy

Parallel write strategy for relational databases in the Target:

  • KEY: Parallel writing to partitions separated based on primary keys.
  • TABLE: Parallel writing to partitions separated based on tables.
  • KEY_UPGRADE_TABLE: Parallel writing to partitions separated based on primary keys. Upgrade the partition to a table if there is an update to the unique key.
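
A minimal sketch of how changes could be routed to parallel apply partitions under the KEY and TABLE strategies; the hashing and partition model are illustrative assumptions, not BladePipe internals, and the KEY_UPGRADE_TABLE upgrade path is omitted.

```python
from dataclasses import dataclass

@dataclass
class Change:
    table: str
    pk: tuple   # primary key values of the changed row
    op: str     # INSERT / UPDATE / DELETE

PARTITIONS = 8  # number of parallel apply workers (illustrative)

def route(change: Change, strategy: str) -> int:
    """Pick the apply partition for a change under the given strategy."""
    if strategy == "KEY":
        # Changes to the same primary key stay ordered in one partition,
        # while different rows are applied in parallel.
        return hash((change.table, change.pk)) % PARTITIONS
    if strategy == "TABLE":
        # All changes of one table stay ordered; different tables run in parallel.
        return hash(change.table) % PARTITIONS
    raise ValueError(f"unsupported strategy: {strategy}")

print(route(Change("orders", (42,), "UPDATE"), "KEY"))
```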

Tips: For general parameter configuration, see General Parameters and Functions.
