How to remedy TiDB DM synchronization after accidentally resetting the master database?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb DM 同步,不小心主库 reset master,怎么补救

| username: love-cat

[TiDB Usage Environment] Production Environment
[TiDB Version] 5.2.2
[Reproduction Path]
[Encountered Problem: Problem Phenomenon and Impact]
Incremental synchronization from MySQL to TiDB using the DM tool was interrupted due to an accidental execution of reset master on the MySQL (primary) database. How can this be remedied?
[Resource Configuration]
[Attachment: Screenshot/Log/Monitoring]

| username: TiDBer_小阿飞 | Original post link

You can only rebuild it, right? After the reset, your binlogs are all gone.

| username: love-cat | Original post link

If it were MySQL → MySQL replication, I could re-point the slave after the reset master, but I don’t know how to handle MySQL → TiDB, and this TiDB cluster also replicates to other database instances.

#############################################
I use DM for MySQL → TiDB in full + incremental mode. After the accidental reset master on the primary, can I change the task to incremental-only synchronization?

I see the following configuration in the DM task:

----------- Instance Configuration -----------

mysql-instances:
  - source-id: "mysql-replica-01"   # Corresponds to `source-id` in source.toml
    meta:                           # The starting binlog position for migration when `task-mode` is `incremental` and no `checkpoint` exists in the downstream database; if a checkpoint exists, it takes precedence. If neither `meta` nor a downstream `checkpoint` exists, migration starts from the latest binlog position in the upstream.
      binlog-name: binlog.000001
      binlog-pos: 4
      binlog-gtid: "03fc0263-28c7-11e7-a653-6c0b84d59f30:1-7041423,05474d3c-28c7-11e7-8352-203db246dd3d:1-170"  # For incremental tasks with `enable-gtid: true` specified in the source, this value must be specified.
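Based on the configuration above, an incremental-only remedy task might look like the following sketch. This is an assumption-laden illustration, not a verified task file: the task name, target address, and the post-reset GTID value are all placeholders. The key idea is that after reset master the upstream restarts numbering at binlog.000001, so `meta` must point at the new, post-reset position rather than the old one:

```yaml
# Hypothetical remedy task file (name, target address, and GTID are placeholders).
name: "remedy-incremental"          # assumed task name
task-mode: incremental              # skip the full dump, replay binlog only

target-database:
  host: "tidb.example.com"          # assumed TiDB address
  port: 4000
  user: "dm_user"
  password: ""

mysql-instances:
  - source-id: "mysql-replica-01"
    meta:                           # must point at the NEW, post-reset binlog
      binlog-name: binlog.000001    # reset master restarts file numbering here
      binlog-pos: 4                 # first event position in a fresh binlog file
      binlog-gtid: "03fc0263-28c7-11e7-a653-6c0b84d59f30:1"  # placeholder new GTID set
```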
| username: Fly-bird | Original post link

It should have the same effect as MySQL.

| username: love-cat | Original post link

I am synchronizing data from two MySQL instances through two DM-workers into TiDB, using DM (full + incremental). On TiDB I cannot run something like reset master the way I could on a MySQL slave. I am considering changing the existing DM task from full + incremental to incremental-only mode. Is this feasible?

| username: love-cat | Original post link

Currently, the production environment cannot be deleted and rebuilt. I have an idea and I’m not sure if it will work:

  1. Delete the current synchronization task (full + incremental), but keep the database in TiDB.
  2. Restart an incremental task to perform incremental synchronization.
| username: TiDBer_小阿飞 | Original post link

Won’t your data be inconsistent in that case? After you reset the master, the binlogs start over from scratch, so from which binlog position would the incremental task resume? DM cannot identify the breakpoint, so there is no way to continue the incremental sync.

Maybe you could pick a point in time, delete the data written after it in the TiDB database, and then resynchronize from that point? Not sure whether that would work.

| username: 有猫万事足 | Original post link

I think this approach is reliable.

Try checking

tiup dmctl binlog --help

to see if there is any way to set the binlog position.

As a last resort, you can stop the task, then go into the dm_meta database, find the corresponding {task_name}_syncer_checkpoint table, and set the values of the binlog_name, binlog_position, and binlog_gtid columns to point to the new starting position of the binlog. Then try resume_task.
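The manual checkpoint edit described above could be sketched as the SQL below. This is a hedged illustration, not a verified procedure: `{task_name}` is a placeholder, the position values must be replaced with your own post-reset values, and the exact column names (including the assumed `is_global` filter) should be verified against your own schema with SHOW CREATE TABLE before running anything:

```sql
-- Hypothetical: stop the task first, then adjust the checkpoint.
-- Replace {task_name} and the position/GTID values with your own.
UPDATE dm_meta.`{task_name}_syncer_checkpoint`
SET binlog_name = 'binlog.000001',   -- post-reset binlog starts over here
    binlog_pos  = 4,
    binlog_gtid = '03fc0263-28c7-11e7-a653-6c0b84d59f30:1'  -- placeholder new GTID set
WHERE is_global = 1;                 -- assumed: update only the global checkpoint row
```

After the update, resume the task and watch the syncer status closely for replication errors.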

| username: love-cat | Original post link

I deleted the tables in dm_meta, stopped the dm-task, and then recreated the dm-task, specifying the binlog name and GTID to create an incremental task. Now the synchronization is normal. Your method should also work.
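For reference, the recovery steps described above might look roughly like the following with dmctl. The task name, master address, and file name are assumptions for illustration only:

```shell
# Hypothetical command sequence (addresses and names are placeholders).

# 1. Stop the broken task.
tiup dmctl --master-addr 127.0.0.1:8261 stop-task my-task

# 2. Drop the stale checkpoint tables in the downstream dm_meta database
#    (run in a MySQL client connected to TiDB), e.g.:
#      DROP TABLE dm_meta.`my-task_syncer_checkpoint`;

# 3. Start a new incremental task whose meta points at the post-reset
#    binlog.000001 and the new GTID set.
tiup dmctl --master-addr 127.0.0.1:8261 start-task remedy-incremental.yaml
```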

| username: love-cat | Original post link

It shouldn’t be messed up. I executed reset master on the master while replication was in a normal, caught-up state, which broke the master-slave link. Then I rebuilt the synchronization task through DM, specifying the first binlog file and a GTID sequence starting from 1.

| username: TiDBer_QHSxuEa1 | Original post link

Can DM guarantee that it had caught up with the binlog before the MySQL reset? Is it possible that some events had not yet been synchronized?
Update the binlog_name and binlog_pos recorded in the <task_name>_syncer_checkpoint table in the downstream dm_meta database, then resume the task. Not sure whether this will work.

| username: love-cat | Original post link

I reset the master, and the GTID set has changed. The records in dm_meta still hold the original GTIDs, so those GTIDs are no longer usable.

| username: andone | Original post link

Rebuild it.