Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: 请问ORACLE数据库中的数据怎么批量迁移到tidb里面
How can I migrate data from an ORACLE database to TiDB in bulk?
The simplest and most direct method is to export CSV from Oracle and then import it into TiDB.
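For the export side, a common approach is a SQL*Plus spool script. A minimal sketch, assuming a hypothetical APP.ORDERS table, placeholder credentials and paths, and SQL*Plus 12.2+ (for SET MARKUP CSV):

```shell
# Write a SQL*Plus spool script that dumps one table as CSV.
# APP.ORDERS, the credentials, and the paths are hypothetical placeholders.
cat > export_orders.sql <<'EOF'
SET MARKUP CSV ON QUOTE ON
SET FEEDBACK OFF TERMOUT OFF
SPOOL /data/export/app.orders.csv
SELECT * FROM app.orders;
SPOOL OFF
EXIT
EOF
# Then run it against Oracle (requires a SQL*Plus client):
#   sqlplus -S app_user/secret@//oracle-host:1521/ORCL @export_orders.sql
```

Watch NULL representation and date formats in the output; they must match what the TiDB-side import tool expects.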
However, the underlying syntax of Oracle and TiDB is different. Will there be any issues?
One is Oracle syntax, and the other is MySQL syntax.
Aren’t you just importing data? That has nothing to do with SQL syntax; you create all the tables in TiDB yourself first.
Exporting to CSV and then importing with Lightning is also an option.
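For the import side, tidb-lightning reads CSV files directly. A minimal config sketch; the directories, host, and CSV dialect settings below are placeholder assumptions to adapt:

```shell
# Minimal tidb-lightning config for CSV import (paths and host are placeholders).
cat > tidb-lightning.toml <<'EOF'
[tikv-importer]
# "local" backend sorts KV pairs on local disk, then ingests into TiKV.
backend = "local"
sorted-kv-dir = "/data/sorted-kv"

[mydumper]
# Directory containing files named like <schema>.<table>.csv
data-source-dir = "/data/export"

[mydumper.csv]
separator = ","
delimiter = '"'
header = true

[tidb]
host = "tidb.example.com"
port = 4000
user = "root"
password = ""
EOF
# Run with:  tiup tidb-lightning -config tidb-lightning.toml
```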
I tried using Navicat for data transfer, but the auto-increment IDs got all messed up, with some IDs jumping by 1000.
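Gaps like that are expected in TiDB: each tidb-server instance caches a batch of AUTO_INCREMENT values, so concurrent writers and restarts leave holes. If contiguous IDs matter, TiDB v6.4+ offers MySQL-compatible centralized allocation. A sketch, with `app.orders` as a hypothetical table name:

```shell
# TiDB allocates AUTO_INCREMENT values in per-instance batches, so ID jumps
# after a bulk load are normal, not corruption. The statements below are a
# sketch; "app.orders" and the value 1000000 are placeholders.
cat > fix_auto_id.sql <<'EOF'
-- MySQL-compatible, non-cached allocation (TiDB >= v6.4):
ALTER TABLE app.orders AUTO_ID_CACHE 1;
-- After importing rows with explicit IDs, move the allocator past them:
ALTER TABLE app.orders AUTO_INCREMENT = 1000000;
EOF
# Apply with:  mysql -h tidb.example.com -P 4000 -u root < fix_auto_id.sql
```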
DataX supports full data migration but does not support incremental data migration. You need to handle some partitioning and processing yourself.
Either fully configure DataX tasks yourself, or use CloudCanal, which is more convenient.
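Since TiDB speaks the MySQL protocol, a DataX job can pair `oraclereader` with `mysqlwriter`. A minimal job sketch; hosts, credentials, table, and column names are placeholders:

```shell
# Sketch of a DataX job: oraclereader -> mysqlwriter (TiDB is MySQL-compatible).
# All connection details, tables, and columns below are hypothetical.
cat > oracle2tidb.json <<'EOF'
{
  "job": {
    "setting": { "speed": { "channel": 4 } },
    "content": [{
      "reader": {
        "name": "oraclereader",
        "parameter": {
          "username": "app_user",
          "password": "***",
          "column": ["ID", "NAME", "CREATED_AT"],
          "connection": [{
            "table": ["ORDERS"],
            "jdbcUrl": ["jdbc:oracle:thin:@//oracle-host:1521/ORCL"]
          }]
        }
      },
      "writer": {
        "name": "mysqlwriter",
        "parameter": {
          "username": "root",
          "password": "***",
          "column": ["id", "name", "created_at"],
          "connection": [{
            "table": ["orders"],
            "jdbcUrl": "jdbc:mysql://tidb-host:4000/app"
          }]
        }
      }
    }]
  }
}
EOF
# Run with:  python datax.py oracle2tidb.json
```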
Either use PingCAP's own tools, or use Debezium and solve it yourself.
Everyone talks about CSV, but when the data volume is large, working through CSV isn't very efficient. It's better to migrate with a dedicated tool.
The tool is TransferDB, which supports heterogeneous schema conversion from Oracle to MySQL/TiDB. The project homepage is GitHub - wentaojin/transferdb: TransferDB supports heterogeneous database schema conversion, full data export/import, and incremental data synchronization (Oracle -> MySQL/TiDB).
This tool was developed by a senior member of PingCAP. Although it has not been officially released, it is quite useful. TransferDB is part of TiDBA, a commonly used TiDB operations and maintenance toolset, which also covers collecting statistics, parsing keys with Mok, generating scatter statements from region keys, data ranges, and size estimates, viewing the region leader distribution of table data and indexes, version upgrades, and comparing 3.0 vs 4.0 configuration files and TiDB system variables. It is very practical. Its project homepage is GitHub - wentaojin/tidba: a collection of common tools for TiDB DBAs.
Anyone who has used Lightning will find this tool familiar: from the configuration file to running the program, it works almost identically. The operation manual that comes with the project is also very detailed.
It includes the following core functions: schema conversion, table structure check, migration cost assessment, data migration (full or incremental), and CSV export. Some of these are still experimental features. Here, I only used its core feature, schema conversion.
Its configuration file parameters are very rich, the comments are very clear, and it is very simple to use. For the schema conversion scenario, you only need to modify the connection information in the [source] and [target] sections. The detailed configuration list can be found here: https://github.com/wentaojin/transferdb/blob/main/conf/config.toml
You can use DataX for full data migration, or you can try CloudCanal.