How to Migrate Large Volumes of Data from TiDB to MySQL Database

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb 大批量数据如何迁移到MySQL数据库

| username: 孤独的狼

How can I migrate a large amount of data from TiDB to MySQL? One table alone is 1TB, which is quite large. Are there any recommended migration methods?

| username: 啦啦啦啦啦 | Original post link

You can try TiCDC.
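
For reference, a minimal sketch of pointing a TiCDC changefeed at a MySQL sink might look like the following. The server address, sink credentials, and changefeed ID are placeholders, and the exact invocation depends on your TiCDC version and deployment. Keep in mind that TiCDC replicates incremental changes only, so the initial full copy still has to be handled separately.

```bash
# Create a changefeed that replicates incremental changes from TiDB to MySQL.
# --server, the sink credentials, and the changefeed ID are placeholders.
tiup cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --sink-uri="mysql://root:password@192.168.0.10:3306/" \
  --changefeed-id="tidb-to-mysql"

# Check that the changefeed is running.
tiup cdc cli changefeed list --server=http://127.0.0.1:8300
```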

| username: 孤独的狼 | Original post link

How do I handle the initial full copy of the data? The table is too large, and mysqldump takes too long.

| username: 啦啦啦啦啦 | Original post link

Check out Dumpling; it supports multi-threaded export.
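
As an illustration, a multi-threaded Dumpling export of a single large table could look roughly like this. The connection details, table name, thread count, chunk and file sizes, and the compression option are placeholders to tune for your environment (the --compress flag requires a reasonably recent Dumpling release).

```bash
# Multi-threaded Dumpling export of one large table.
#   -t : number of export threads
#   -r : split the table into row chunks so it can be exported concurrently
#   -F : cap the size of each output file
tiup dumpling \
  -h 192.168.0.1 -P 4000 -u root -p 'password' \
  -T mydb.big_table \
  --filetype sql \
  -t 16 -r 200000 -F 256MiB \
  --compress gzip \
  -o /data/dump
```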

| username: 孤独的狼 | Original post link

Even with multiple threads, logical export speed is limited. Are there other tools that can solve this? If the data in TiDB is 1TB, roughly how large will the exported file be? What is the ratio?

| username: 啦啦啦啦啦 | Original post link

For synchronization between heterogeneous databases, logical export and import is really the only option. The size of the exported SQL file depends on the data types, compression, and so on, so it is hard to estimate without actually exporting and checking.

| username: 孤独的狼 | Original post link

Okay. TiDB doesn’t have this kind of test data either, right?

| username: 孤独的狼 | Original post link

I just want a rough idea of the expected export ratio.

| username: Billmay表妹 | Original post link

Since you are already on TiDB, why do you need to export the data back to MySQL? If you need a backup, you could also set up another TiDB cluster as the backup.

| username: zhanggame1 | Original post link

I have tried using Navicat to transfer data between two databases. It works fine with a small amount of data, but TiDB tends to report OOM (Out of Memory) errors when dealing with larger data volumes.

| username: db_user | Original post link

You can try SELECT ... INTO OUTFILE, splitting the export by ID or some other key, then compress the files, upload them to the MySQL server, decompress them, and import with LOAD DATA. If MySQL is not serving business traffic at that point, you can set innodb_flush_log_at_trx_commit and sync_binlog to 0. Personally, I feel this approach might be faster.
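
For the import side of this idea, a rough sketch might look like the following. It assumes the exported chunks have already been copied to the MySQL server and decompressed as CSV files; the paths, delimiters, table name, and credentials are all placeholders.

```bash
# Import one decompressed chunk into MySQL with relaxed durability settings.
# Only relax these settings while the server is not serving business traffic.
mysql -h 127.0.0.1 -u root -p'password' mydb <<'SQL'
SET GLOBAL innodb_flush_log_at_trx_commit = 0;
SET GLOBAL sync_binlog = 0;

-- If secure_file_priv blocks this path, switch to LOAD DATA LOCAL INFILE
-- from the client (requires local_infile=ON).
LOAD DATA INFILE '/data/export/big_table.000000001.csv'
INTO TABLE big_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- Restore durable settings after all chunks are loaded.
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;
SQL
```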

| username: 孤独的狼 | Original post link

Okay. Thanks everyone, I’ll take another look.

| username: huhaifeng | Original post link

Exporting with Dumpling is relatively fast; however, how long the import into MySQL takes depends on the MySQL-side tools you use.
If you need incremental synchronization, TiCDC seems to be the only option.
Lastly, I’m curious: why do you need to move such a large dataset back to MySQL? Usually data at this scale is moved from MySQL into TiDB, not the other way around.
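
As a rough illustration of the MySQL-side import, the sketch below pipes a Dumpling SQL dump through the plain mysql client. The host, credentials, database name (mydb), and file-name patterns are placeholders that assume Dumpling's default naming, so spot-check the dump directory before relying on them. A dedicated parallel loader could well be faster; this just keeps the moving parts to a minimum.

```bash
DUMP=/data/dump
CONN="-h 192.168.0.10 -u root -ppassword"

# 1. Create the target database, then apply each table's schema.
mysql $CONN < "$DUMP"/mydb-schema-create.sql
for f in "$DUMP"/mydb.*-schema.sql; do
  mysql $CONN mydb < "$f"
done

# 2. Load the data files; xargs -P 4 runs four imports in parallel, which is
#    usually where the MySQL-side import time is won or lost.
find "$DUMP" -name 'mydb.*.sql' ! -name '*-schema*' -print0 |
  xargs -0 -P 4 -I{} sh -c "mysql $CONN mydb < '{}'"
```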

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.