How to copy a table from one database to another within the same cluster?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 怎么把一个库里的表复制一份到相同集群的另一个库里?

| username: TiDBer_yUoxD0vR

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed to encounter the issue
[Encountered Issue: Problem Phenomenon and Impact]
How to copy tables from one database to another within the same cluster?
For example, copy tables from the dbtest01 database to the dbtest02 database within the same cluster.
There are many tables, so is there a way other than INSERT INTO ... SELECT? (A sketch of that per-table approach appears below for reference.)

[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
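
For reference, a minimal sketch of the per-table approach the question wants to avoid, assuming a local TiDB on the default port 4000; the host, credentials, and table name `t1` are illustrative:

```bash
# Clone one table's schema and data within the same cluster
# (repeat per table; this is what becomes tedious with many tables).
mysql -h 127.0.0.1 -P 4000 -u root -e "
  CREATE TABLE dbtest02.t1 LIKE dbtest01.t1;
  INSERT INTO dbtest02.t1 SELECT * FROM dbtest01.t1;
"
```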

| username: TiDBer_pkQ5q1l0 | Original post link

How about a logical export followed by an import?
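
For a logical export on TiDB, Dumpling (also suggested later in the thread) is the usual tool; a minimal sketch, assuming a local cluster on the default port and illustrative connection parameters:

```bash
# Export every table in dbtest01 as SQL files into ./dump.
tiup dumpling -h 127.0.0.1 -P 4000 -u root \
  -B dbtest01 --filetype sql -o ./dump
```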

| username: TiDBer_yUoxD0vR | Original post link

When using Loader to import data, you cannot specify which database to import into; it can only import into the database with the original name.

| username: TiDBer_pkQ5q1l0 | Original post link

After exporting, rename the files to use the new database name, and change the database name inside the schema file as well.
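
A sketch of that rename step, assuming Dumpling's default file layout in a hypothetical ./dump directory:

```bash
cd ./dump
# Dumpling names files <db>-schema-create.sql, <db>.<table>-schema.sql,
# and <db>.<table>.<chunk>.sql, so renaming the db prefix is what lets
# the importer target the new database.
for f in dbtest01*; do
  mv "$f" "${f/dbtest01/dbtest02}"
done
# The schema-create file also embeds the database name in its
# CREATE DATABASE statement; rewrite that as well.
sed -i 's/dbtest01/dbtest02/g' dbtest02-schema-create.sql
```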

| username: tidb菜鸟一只 | Original post link

If the data itself doesn't change, first export the tables from the source database, replace the database name in the exported files with the new one, and then import them again.

| username: caiyfc | Original post link

Export all the data you want to import using Dumpling, either in SQL or CSV format. Then, batch rename the files, changing dbtest01 to dbtest02, and use Lightning to import them.
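
A minimal sketch of the Lightning step over the renamed files, assuming a local cluster with default ports; the backend choice, paths, and config file name are illustrative:

```bash
cat > lightning.toml <<'EOF'
[tikv-importer]
# The "tidb" backend writes through SQL and is the simplest for small
# jobs; "local" is faster for large imports.
backend = "tidb"

[mydumper]
data-source-dir = "./dump"

[tidb]
host = "127.0.0.1"
port = 4000
user = "root"
status-port = 10080
pd-addr = "127.0.0.1:2379"
EOF

tiup tidb-lightning --config lightning.toml
```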

| username: dbaspace | Original post link

mydumper

| username: Running | Original post link

For small amounts of data, you can use Navicat directly. For more complex scenarios, you can use tools like DataX.

| username: TiDBer_yUoxD0vR | Original post link

How can I ensure data consistency when exporting data with mysqldump? Adding the --single-transaction --master-data=2 options results in an error:

mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': FLUSH TABLES WITH READ LOCK is not supported. Please use @@tidb_snapshot (1105)

| username: TiDBer_pkQ5q1l0 | Original post link

Use Dumpling; it exports consistent data by default.
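
On TiDB, Dumpling defaults to a consistent snapshot read rather than FLUSH TABLES WITH READ LOCK; the behavior can also be pinned explicitly. A sketch with illustrative connection parameters:

```bash
# --consistency snapshot reads all data at a single TSO timestamp.
tiup dumpling -h 127.0.0.1 -P 4000 -u root -B dbtest01 \
  --consistency snapshot -o ./dump
```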

| username: tony5413 | Original post link

If the data volume is not large, use mydumper.

| username: maokl | Original post link

DataX

| username: 胡杨树旁 | Original post link

Lightning can map tables to a different database during import.
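
"Map" here refers to Lightning's table-routing feature, which renames databases at import time without touching the exported files. A sketch of the relevant config fragment, appended to the hypothetical lightning.toml above; it assumes a schema-only route keeps table names unchanged:

```bash
cat >> lightning.toml <<'EOF'
# Route every table under dbtest01 into dbtest02 at import time.
[[routes]]
schema-pattern = "dbtest01"
target-schema = "dbtest02"
EOF
```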

| username: Kongdom | Original post link

I generally use ETL tools like Kettle for replication.

| username: magic | Original post link

Export the data file and then import it.

| username: 胡杨树旁 | Original post link

With Kettle, do you set it up as one table per job?

| username: liuis | Original post link

Export and then re-import.

| username: TiDBer_yUoxD0vR | Original post link

Lightning can't import into a different database, right? I couldn't find that in the documentation.

| username: Kongdom | Original post link

Yes, I usually make one table per job.

| username: caiyfc | Original post link

Just change the name. Lightning identifies the database and table names based on the source file’s filename.
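
For reference, Dumpling's naming convention, which Lightning parses to choose the target database and table; an illustrative listing after the rename, with a hypothetical table t1:

```bash
ls ./dump
# dbtest02-schema-create.sql     -> CREATE DATABASE dbtest02
# dbtest02.t1-schema.sql         -> CREATE TABLE dbtest02.t1
# dbtest02.t1.000000000.sql      -> data for dbtest02.t1
```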