DM data synchronization: upstream MySQL has more columns than downstream TiDB

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: dm 数据同步上游MySQL数据字段多余下游tidb数据字段

| username: TiDBer_QHSxuEa1

The official method works for the case where the downstream TiDB has more columns than the upstream MySQL, but when I tested the reverse case, where the upstream MySQL has more columns than the downstream TiDB, it doesn't work. How should this be handled?

| username: xfworld | Original post link

This sentence is hard to parse… impressive.

MySQL is synchronized to TiDB through DM, right?

Reference documentation:

It feels very troublesome…

| username: tidb菜鸟一只 | Original post link

If the upstream has more columns than the downstream, DM cannot replicate the data. When it parses the binlog, it does not drop the columns that don't exist downstream, so it will definitely report an error…
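To illustrate the claim above with a minimal sketch (using SQLite as a stand-in for the downstream, with illustrative table and column names): a binlog row event carries a value for every upstream column, so replaying it against a table with fewer columns fails on the column-count mismatch, unless the tool first projects the row onto the downstream schema, which DM does not do in this direction.

```python
import sqlite3

# Stand-in for the downstream table, which has fewer columns than upstream.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")  # downstream: 2 columns

# A binlog row event from upstream carries values for ALL upstream columns.
upstream_columns = ["id", "name", "extra_col"]  # upstream: 3 columns
row_values = (1, "alice", "surplus")

# Replaying the row as-is fails: 3 values, but the table has only 2 columns.
try:
    conn.execute("INSERT INTO t VALUES (?, ?, ?)", row_values)
except sqlite3.OperationalError as e:
    print("replication error:", e)

# A tool would have to project the row onto the downstream schema first:
downstream_columns = ["id", "name"]
keep = [upstream_columns.index(c) for c in downstream_columns]
projected = tuple(row_values[i] for i in keep)
conn.execute("INSERT INTO t (id, name) VALUES (?, ?)", projected)
print(conn.execute("SELECT * FROM t").fetchall())  # [(1, 'alice')]
```

The projection step is exactly what is missing: DM's documented schema-mismatch support covers extra *downstream* columns (which can simply be left NULL or defaulted), not extra upstream ones.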

| username: WalterWj | Original post link

Synchronizing heterogeneous databases is already challenging, and you’re also trying to synchronize different table structures… What exactly is your requirement?

| username: TiDBer_QHSxuEa1 | Original post link

Just testing DM functionality and heterogeneous data synchronization on my own.

| username: TiDBer_QHSxuEa1 | Original post link

I saw that the binlog-schema update command has --from-source and --from-target options, and I assumed it could also drop the extra upstream columns.

| username: redgame | Original post link

This is not easy to handle, boss, it’s a table structure issue.

| username: ealam_小羽 | Original post link

If the data volume isn't particularly large and the latency requirement isn't strict, I feel DataX can handle it. Its approach is to configure the field mappings and generate the SQL for synchronization, so the extra upstream columns don't matter.
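The field-mapping idea described above can be sketched in a few lines (again using SQLite as a stand-in; the function name and tables are illustrative, not DataX's actual API): only the configured columns are selected and written, so columns that exist only upstream never appear in the generated SQL.

```python
import sqlite3

def sync(src, dst, table, columns):
    """Copy only the configured columns from src to dst, DataX-style."""
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    dst.executemany(
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows
    )

# Upstream has an extra column; downstream does not.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, name TEXT, extra TEXT)")
src.execute("INSERT INTO t VALUES (1, 'a', 'x')")

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE t (id INTEGER, name TEXT)")

sync(src, dst, "t", ["id", "name"])
print(dst.execute("SELECT * FROM t").fetchall())  # [(1, 'a')]
```

The trade-off versus DM is that this is batch-style full or incremental SQL extraction rather than binlog-based real-time replication, which is why it only fits when the delay requirement is loose.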

| username: Hacker007 | Original post link

It won't work; it will report an error, and there's no way around it.