Update Statement Error: TiKV Max Timestamp is Not Synced

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: update语句报错TiKV max timestamp is not synced (UPDATE statement reports error "TiKV max timestamp is not synced")

| username: Kongdom

[TiDB Usage Environment] Test
[TiDB Version] v7.1.0
[Reproduction Path] None
[Encountered Issue: Problem Phenomenon and Impact]
SELECT statements execute normally, but UPDATE statements fail with an error:

Error occurred during SQL query execution

Reason:
SQL Error [9011] [HY000]: TiKV max timestamp is not synced

A single server running 1 TiDB, 1 PD, and 1 TiKV, with a replica count of 3. Has anyone encountered this situation?

tidb.log

[2023/12/04 11:11:00.257 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079800581555307] [region="{ region id: 596255, ver: 12, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:01.246 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1280050]
[2023/12/04 11:11:03.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=50100]
[2023/12/04 11:11:05.582 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079800581555307] [newTTL=90049]
[2023/12/04 11:11:11.246 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1290050]
[2023/12/04 11:11:13.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=60100]
[2023/12/04 11:11:15.271 +08:00] [INFO] [domain.go:2652] ["refreshServerIDTTL succeed"] [serverID=1179668] ["lease id"=7a0d8c32b5e91416]
[2023/12/04 11:11:15.583 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079800581555307] [newTTL=100049]
[2023/12/04 11:11:15.801 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:14.297482197+08:00\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:14.798682359+08:00\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:15.300027358+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:967] ["can not retry txn"] [label=internal] [error="[tikv:9011]TiKV max timestamp is not synced"] [IsBatchInsert=false] [IsPessimistic=true] [InRestrictedSQL=true] [tidb_retry_limit=10] [tidb_disable_txn_auto_retry=true]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:983] ["commit failed"] ["finished txn"="Txn{state=invalid}"] [error="[tikv:9011]TiKV max timestamp is not synced"]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:2239] ["run statement failed"] [schemaVersion=476391] [error="previous statement: insert into mysql.stats_meta (version, table_id, modify_count, count) values (446079800581555307, 1862, 1, 0) on duplicate key update version = values(version), modify_count = modify_count + values(modify_count), count = count + values(count): [tikv:9011]TiKV max timestamp is not synced"] [session="{\n \"currDBName\": \"\",\n \"id\": 0,\n \"status\": 2,\n \"strictMode\": true,\n \"user\": null\n}"]
[2023/12/04 11:11:21.247 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1300050]
[2023/12/04 11:11:23.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=70100]
[2023/12/04 11:11:25.809 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079821618349591] [newTTL=30000]
[2023/12/04 11:11:31.246 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1310050]
[2023/12/04 11:11:33.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=80100]
[2023/12/04 11:11:33.851 +08:00] [WARN] [expensivequery.go:118] [expensive_query] [cost_time=60.01317402s] [conn=2594117365830517149] [user=root] [database=enjoycrm_cs] [txn_start_ts=446079810568980924] [mem_max="0 Bytes (0 Bytes)"] [sql=COMMIT]
[2023/12/04 11:11:35.810 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079821618349591] [newTTL=40001]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079810568980924] [region="{ region id: 194781, ver: 618, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079810568980924] [region="{ region id: 96760, ver: 1877, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079810568980924] [region="{ region id: 351071, ver: 489, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079810568980924] [region="{ region id: 194785, ver: 618, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] ["slow prewrite request"] [startTS=446079810568980924] [region="{ region id: 93046, ver: 488, confVer: 1 }"] [attempts=14]
[2023/12/04 11:11:41.246 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1320050]
[2023/12/04 11:11:43.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=90100]
[2023/12/04 11:11:45.809 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079821618349591] [newTTL=50000]
[2023/12/04 11:11:51.246 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079487489608742] [newTTL=1330050]
[2023/12/04 11:11:53.752 +08:00] [INFO] [2pc.go:1195] ["send TxnHeartBeat"] [startTS=446079810568980924] [newTTL=100100]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562985987+08:00\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064229374+08:00\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565749458+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562926189+08:00\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064263806+08:00\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565740895+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] ["maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 194781, meta: id:194781 start_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000

| username: WalterWj | Original post link

It feels like there is an issue with the server clock.

| username: Kongdom | Original post link

A single node shouldn’t have any time inconsistency issues, right?

| username: WalterWj | Original post link

Has the clock been rolled back (i.e., did system time jump backwards)?

| username: Kongdom | Original post link

How do I verify that? I'm not sure whether it happened.
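(Editor's note: one way to check for clock stepping is to inspect the host's time-synchronization status. The commands below assume a systemd-based Linux host running chrony, which the thread does not state; adapt them to your environment.)

```shell
# Check whether the system clock is NTP-synchronized:
timedatectl status

# If chrony is the time daemon, inspect current offset and drift:
chronyc tracking
chronyc sources -v

# Entries in the system log about the clock being stepped (rather than slewed)
# would indicate the time was adjusted backwards at some point:
journalctl -u chronyd | grep -i "step" | tail -n 20
```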

| username: h5n1 | Original post link

How about trying to disable both tidb_enable_1pc and tidb_enable_async_commit?
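(Editor's note: the two settings mentioned above are TiDB global system variables; a sketch of checking and disabling them from a MySQL client follows. The host, port, and user are placeholders, not values from this thread.)

```shell
# Check the current values first (placeholder connection details):
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SHOW VARIABLES LIKE 'tidb_enable_async_commit'; SHOW VARIABLES LIKE 'tidb_enable_1pc';"

# Disable async commit and one-phase commit cluster-wide:
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SET GLOBAL tidb_enable_async_commit = OFF; SET GLOBAL tidb_enable_1pc = OFF;"
```

New sessions pick up the global values; existing connections may need to reconnect.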

| username: 像风一样的男子 | Original post link

A similar error was resolved by restarting.

| username: Kongdom | Original post link

:joy: That's my colleague, and this time restarting didn't work…
Additionally, we are using a physical server, not a virtual machine.

| username: Kongdom | Original post link

I’ll give it a try~

| username: dba远航 | Original post link

Check PD and the related communication parameters to see whether there are any TSO anomalies in PD.
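(Editor's note: a sketch of basic PD health and TSO checks using pd-ctl via tiup, matching the cluster version in this thread; the PD endpoint and log path are placeholders.)

```shell
# Confirm PD is healthy and which member holds the leader (and thus the TSO allocator):
tiup ctl:v7.1.0 pd -u http://127.0.0.1:2379 health
tiup ctl:v7.1.0 pd -u http://127.0.0.1:2379 member

# Look for recent TSO-related warnings or errors in the PD log (placeholder path):
grep -i "tso" /path/to/pd.log | tail -n 20
```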

| username: Kongdom | Original post link

:thinking: It’s very strange, but it did work after the change. Let’s keep an eye on it for a while~

| username: Kongdom | Original post link

In theory, a single node shouldn't hit this issue. It may be that even in a single-node deployment, internal communication still goes through the same cluster-mode paths.

| username: Kongdom | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.