For a table in a TiDB cluster, when a record is updated in the primary cluster, Replica 1 applies the update normally but Replica 2 does not

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb集群中的某张表,主库集群某一条记录进行update更新时,从库1可以正常更新,从库2没有对此条记录进行正常更新

| username: vcdog

【TiDB Usage Environment】Production Environment
【TiDB Version】v6.5.0
【Reproduction Path】For a certain table in the TiDB cluster, a record is updated in the primary cluster; the update is applied normally on Replica 1, but not on Replica 2.
【Encountered Problem: Phenomenon and Impact】For a certain table in the TiDB cluster, when a record is updated in the primary cluster, the update is applied normally on Replica 1, but Replica 2 does not apply it.


  1. The current workaround is to re-dump this table's data from the primary cluster, import it into Replica 2, and recreate the CDC task for synchronization.
  2. Several other business tables related to this one have the same issue. Is there a better way, for example directly querying this table in Replica 2 for the records with the same primary keys (see the sketch at the end of this post)?

【Resource Configuration】Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
【Attachments: Screenshots/Logs/Monitoring】
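
As a sketch of the spot-check idea in point 2 above (not a verified procedure: host names, credentials, database/table names, and the primary-key value are placeholders, and both clusters are assumed to be reachable over the MySQL protocol on port 4000):

```bash
# Placeholder primary-key value of the record that failed to update on Replica 2.
PK=12345

# Fetch the row from the primary cluster and from Replica 2.
mysql -h primary-tidb -P 4000 -u root -p -e \
  "SELECT * FROM mydb.mytable WHERE id = ${PK}\G" > primary_row.txt

mysql -h replica2-tidb -P 4000 -u root -p -e \
  "SELECT * FROM mydb.mytable WHERE id = ${PK}\G" > replica2_row.txt

# Any output here shows exactly which columns diverge on Replica 2.
diff primary_row.txt replica2_row.txt
```

For comparing whole tables between the two clusters rather than single rows, sync-diff-inspector is the usual tool.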

| username: zhaokede | Original post link

Are the synchronization configurations for Replica 1 and Replica 2 the same?

| username: vcdog | Original post link

The synchronization configuration is the same.

| username: xfworld | Original post link

Upgrade the TiDB cluster version. Between v6.5.0 and v6.5.9, more than 200 TiCDC bugs were fixed… :see_no_evil:

| username: yytest | Original post link

It is recommended to upgrade the cluster version and back up the current production data before upgrading.
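
A rough sketch of that, assuming a tiup-managed cluster (the cluster name, PD address, and backup path are placeholders; check the exact flags against your tiup and BR versions, and prefer shared storage such as S3/NFS for the backup):

```bash
# Full backup with BR before the upgrade.
tiup br backup full --pd "pd-host:2379" --storage "local:///data/backup/full-$(date +%F)"

# Upgrade the tiup-managed cluster to a newer patch release.
tiup cluster upgrade my-tidb-cluster v6.5.9
```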

| username: TIDB-Learner | Original post link

  1. Is the synchronization issue occurring with a specific table or all tables?
  2. Check whether the table structure of the corresponding table is consistent across the primary cluster and Replicas 1 and 2. Were there any modifications before the issue occurred?
  3. Check the status of the corresponding table's changefeed and processor (example commands below).
  4. Recreate the changefeed task for the table to see if synchronization returns to normal.
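
As a rough illustration of steps 3 and 4 (the TiCDC server address, changefeed ID, and sink URI are placeholders; on some versions the CLI takes --pd pointing at PD instead of --server):

```bash
# Step 3: changefeed and processor status.
tiup cdc cli changefeed list --server=http://cdc-host:8300
tiup cdc cli changefeed query -c replica2-changefeed --server=http://cdc-host:8300
tiup cdc cli processor list --server=http://cdc-host:8300

# Step 4: recreate the changefeed feeding Replica 2
# (consider --start-ts if historical changes must be replayed).
tiup cdc cli changefeed remove -c replica2-changefeed --server=http://cdc-host:8300
tiup cdc cli changefeed create --changefeed-id=replica2-changefeed \
  --sink-uri="mysql://user:password@replica2-tidb:4000/" --server=http://cdc-host:8300
```
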
| username: 健康的腰间盘 | Original post link

Go ahead and upgrade; version 7.5 works quite well.

| username: erwadba | Original post link

Try executing `ADMIN CHECK TABLE <table_name>` on Replica 2. There shouldn’t be duplicate primary key records.
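
A minimal sketch of that check, assuming Replica 2 is reachable on port 4000 and the table has a primary key column named id (all names are placeholders):

```bash
# ADMIN CHECK TABLE verifies data/index consistency within Replica 2 itself.
mysql -h replica2-tidb -P 4000 -u root -p -e "ADMIN CHECK TABLE mydb.mytable;"

# Should return no rows if the primary key is not duplicated.
mysql -h replica2-tidb -P 4000 -u root -p -e \
  "SELECT id, COUNT(*) FROM mydb.mytable GROUP BY id HAVING COUNT(*) > 1;"
```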

| username: 呢莫不爱吃鱼 | Original post link

Upgrade to version 7.X.

| username: 小于同学 | Original post link

Is the configuration consistent?

| username: 小龙虾爱大龙虾 | Original post link

Please describe your scenario: what is the upstream database, what databases are Replica 1 and Replica 2, how is the data synchronized, and what is the current structure of this table?

| username: yytest | Original post link

It is recommended to upgrade to the latest stable version 7.5.1.

| username: 健康的腰间盘 | Original post link

Upgrade version

| username: TiDBer_RjzUpGDL | Original post link

Upgrade, upgrade.

| username: 不想干活 | Original post link

Upgrade to 7.5.

| username: Jack-li | Original post link

It’s better to upgrade to the new version.