Select or set operations report an error: ERROR 9006 (HY000): GC life time is shorter than transaction duration, transaction starts at 2023-10-31 08:55:09.646 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: select或者set操作都报错: ERROR 9006 (HY000): GC life time is shorter than transaction duration, transaction starts at 2023-10-31 08:55:09.646 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC

| username: Zz_zZ

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] TiDB-v5.2.2
[Reproduction Path] Unable to reproduce in the testing environment for now
[Encountered Issue: Problem Phenomenon and Impact]:
Executing SQL statements such as SELECT and SET fails with error 9006. No similar 9006 issues were found on the forum. Because SET also fails, the tidb_gc_life_time variable cannot be modified.
Enabling TiDB debug logs shows:

[2023/10/31 09:25:36.411 +00:00] [DEBUG] [ddl_worker.go:179] ["[ddl] wait to check DDL status again"] [worker="worker 2, tp add index"] [interval=1s]
[2023/10/31 09:25:36.412 +00:00] [DEBUG] [ddl_worker.go:179] ["[ddl] wait to check DDL status again"] [worker="worker 1, tp general"] [interval=1s]
[2023/10/31 09:25:36.412 +00:00] [DEBUG] [ddl.go:220] ["[ddl] check whether is the DDL owner"] [isOwner=true] [selfID=03f10a9b-0eb2-497c-9cc8-6ae9ff12db2a]
[2023/10/31 09:25:36.412 +00:00] [DEBUG] [ddl.go:220] ["[ddl] check whether is the DDL owner"] [isOwner=true] [selfID=03f10a9b-0eb2-497c-9cc8-6ae9ff12db2a]
[2023/10/31 09:25:36.412 +00:00] [DEBUG] [txn.go:431] ["[kv] rollback txn"] [txnStartTS=445315635319930881]
[2023/10/31 09:25:36.412 +00:00] [WARN] [ddl_worker.go:199] ["[ddl] handle DDL job failed"] [worker="worker 2, tp add index"] [error="[tikv:9006]GC life time is shorter than transaction duration, transaction starts at 2023-10-31 09:25:36.395 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC"]

[Resource Configuration]:
cpu: 2
memory: 8Gi

pd.log (43.9 KB)
tidb.log (10.6 MB)
tikv.log (15.6 MB)

| username: Soysauce520 | Original post link

Does the GC processing time look normal in the monitoring?

| username: Miracle | Original post link

What is the current GC value?

| username: 大飞哥online | Original post link

Check whether a DDL job is stuck.

| username: Zz_zZ | Original post link

Thank you for your attention~


This is the situation on the dashboard.

This is the real-time monitoring from the metrics interface query.
metric.log (446.7 KB)

| username: Zz_zZ | Original post link

Thank you for your attention~


This is the situation of the Grafana dashboard.

I don’t quite understand these metrics. It looks like the job latency distribution is mostly in the high latency part.

# HELP tidb_ddl_deploy_syncer_duration_seconds Bucketed histogram of processing time (s) of deploy syncer
# TYPE tidb_ddl_deploy_syncer_duration_seconds histogram
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.001"} 0
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.002"} 0
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.004"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.008"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.016"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.032"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.064"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.128"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.256"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="0.512"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="1.024"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="2.048"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="4.096"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="8.192"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="16.384"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="32.768"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="65.536"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="131.072"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="262.144"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="524.288"} 1
tidb_ddl_deploy_syncer_duration_seconds_bucket{result="ok",type="init",le="+Inf"} 1
tidb_ddl_deploy_syncer_duration_seconds_sum{result="ok",type="init"} 0.002846399
tidb_ddl_deploy_syncer_duration_seconds_count{result="ok",type="init"} 1
# HELP tidb_ddl_update_self_ver_duration_seconds Bucketed histogram of processing time (s) of update self version
# TYPE tidb_ddl_update_self_ver_duration_seconds histogram
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.001"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.002"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.004"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.008"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.016"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.032"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.064"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.128"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.256"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="0.512"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="1.024"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="2.048"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="4.096"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="8.192"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="16.384"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="32.768"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="65.536"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="131.072"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="262.144"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="524.288"} 1
tidb_ddl_update_self_ver_duration_seconds_bucket{result="ok",le="+Inf"} 1
tidb_ddl_update_self_ver_duration_seconds_sum{result="ok"} 0.000376232
tidb_ddl_update_self_ver_duration_seconds_count{result="ok"} 1
# HELP tidb_ddl_worker_operation_total Counter of creating ddl/worker and isowner.
# TYPE tidb_ddl_worker_operation_total counter
tidb_ddl_worker_operation_total{type="create_ddl_instance"} 1
tidb_ddl_worker_operation_total{type="create_ddl_worker 1, tp general"} 1
tidb_ddl_worker_operation_total{type="create_ddl_worker 2, tp add index"} 1
tidb_ddl_worker_operation_total{type="owner_v5.2.2"} 13258
tidb_ddl_worker_operation_total{type="start_clean_work"} 1
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.0005"} 7026
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.001"} 13168
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.002"} 13221
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.004"} 13237
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.008"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.016"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.032"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.064"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.128"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.256"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="0.512"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="1.024"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="2.048"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="4.096"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="8.192"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="16.384"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="32.768"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="65.536"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="131.072"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="262.144"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="524.288"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="1048.576"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="2097.152"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="4194.304"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="8388.608"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="16777.216"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="33554.432"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="67108.864"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="134217.728"} 13240
tidb_meta_operation_duration_seconds_bucket{result="err",type="get_ddl_job",le="+Inf"} 13240
tidb_meta_operation_duration_seconds_sum{result="err",type="get_ddl_job"} 6.707527507000024
tidb_meta_operation_duration_seconds_count{result="err",type="get_ddl_job"} 13240
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.0005"} 16
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.001"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.002"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.004"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.008"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.016"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.032"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.064"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.128"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.256"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="0.512"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="1.024"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="2.048"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="4.096"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="8.192"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="16.384"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="32.768"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="65.536"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="131.072"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="262.144"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="524.288"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="1048.576"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="2097.152"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="4194.304"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="8388.608"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="16777.216"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="33554.432"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="67108.864"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="134217.728"} 18
tidb_meta_operation_duration_seconds_bucket{result="ok",type="get_ddl_job",le="+Inf"} 18
tidb_meta_operation_duration_seconds_sum{result="ok",type="get_ddl_job"} 0.007464113
tidb_meta_operation_duration_seconds_count{result="ok",type="get_ddl_job"} 18
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.0005"} 0
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.001"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.002"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.004"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.008"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.016"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.032"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.064"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.128"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.256"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="0.512"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9ed6-b7630eebcd39",le="1.024"} 1
tidb_owner_new_session_duration_seconds_bucket{result="ok",type="[ddl-syncer] /tidb/ddl/all_schema_versions/72a55cc4-fa94-4dd2-9

| username: Zz_zZ | Original post link

Refer to this article to collect GC-related information: [SOP Series 25] Troubleshooting Common GC Issues - TiDB Q&A community

GC parameter configuration; the SELECT errors out, so the mysql.tidb table cannot be read:

MySQL [(none)]> select * from mysql.tidb;
ERROR 9006 (HY000): GC life time is shorter than transaction duration, transaction starts at 2023-10-31 12:09:23.145 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC
MySQL [(none)]>
MySQL [(none)]>
MySQL [(none)]>
MySQL [(none)]> show variables like '%gc%';
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| tidb_gc_concurrency    | -1     |
| tidb_gc_enable         | ON     |
| tidb_gc_life_time      | 10m0s  |
| tidb_gc_run_interval   | 10m0s  |
| tidb_gc_scan_lock_mode | LEGACY |
+------------------------+--------+
5 rows in set (0.007 sec)

Check the gc-worker logs in the TiDB log

[2023/10/31 10:16:26.140 +00:00] [INFO] [gc_worker.go:197] ["[gc worker] start"] [uuid=62e14f023bc0002]
[2023/10/31 10:16:26.144 +00:00] [DEBUG] [gc_worker.go:1826] ["[gc worker] load kv"] [key=tikv_gc_leader_uuid] [value=629bd7ea8d80002]
[2023/10/31 10:16:26.144 +00:00] [DEBUG] [gc_worker.go:1684] ["[gc worker] got leader"] [uuid=629bd7ea8d80002]
[2023/10/31 10:16:26.146 +00:00] [WARN] [gc_worker.go:276] ["[gc worker] check leader"] [error="inconsistent index PRIMARY handle count 1 isn't equal to value count 0"]
[2023/10/31 10:17:26.148 +00:00] [WARN] [gc_worker.go:276] ["[gc worker] check leader"] [error="[tikv:9006]GC life time is shorter than transaction duration, transaction starts at 2023-10-31 10:17:26.145 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC"]
[2023/10/31 10:18:26.148 +00:00] [WARN] [gc_worker.go:276] ["[gc worker] check leader"] [error="[tikv:9006]GC life time is shorter than transaction duration, transaction starts at 2023-10-31 10:18:26.146 +0000 UTC, GC safe point is 2140-03-23 09:32:10.849 +0000 UTC"]
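
The failing check can be illustrated with a simplified sketch (illustrative only, not TiDB's actual implementation): a transaction whose start TSO lies below the GC safe point is rejected, and since this cluster's safe point sits in the year 2140, every new transaction's start TS is "too old" — including the SET needed to fix tidb_gc_life_time.

```python
# Simplified sketch of the check behind error 9006 (not TiDB's real code):
# a transaction must not start below the GC safe point.

def check_txn_against_safepoint(start_ts: int, safe_point_ts: int) -> None:
    if start_ts < safe_point_ts:
        raise RuntimeError(
            "GC life time is shorter than transaction duration, "
            f"transaction start ts {start_ts} < GC safe point ts {safe_point_ts}"
        )

txn_start_ts = 445315635319930881    # from the log: 2023-10-31 09:25:36.395 UTC
gc_safe_point = 1408180297622880256  # from PD: 2140-03-23 09:32:10.849 UTC
try:
    check_txn_against_safepoint(txn_start_ts, gc_safe_point)
except RuntimeError as exc:
    print("rejected:", exc)
```

With a safe point in the future, this comparison fails for any start_ts the cluster can currently produce.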

Check service-gc-safepoint

{
  "service_gc_safe_points": [
    {
      "service_id": "gc_worker",
      "expired_at": 9223372036854775807,
      "safe_point": 1408180297622880256
    }
  ],
  "gc_safe_point": 1408180297622880256
}
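
The raw safe_point above is a TSO: the high bits are a physical timestamp in milliseconds and the low 18 bits a logical counter (TiDB's documented TSO layout). Decoding it shows the safe point really does land in 2140, while the failing transaction's start TS is from 2023:

```python
from datetime import datetime, timedelta, timezone

TSO_LOGICAL_BITS = 18  # TiDB TSO layout: (physical ms << 18) | logical counter

def tso_to_utc(tso: int) -> datetime:
    """Convert a TiDB TSO to a UTC datetime (integer math keeps the ms exact)."""
    physical_ms = tso >> TSO_LOGICAL_BITS
    return datetime.fromtimestamp(physical_ms // 1000, tz=timezone.utc) \
        + timedelta(milliseconds=physical_ms % 1000)

print(tso_to_utc(1408180297622880256))  # gc_safe_point -> 2140-03-23 09:32:10.849000+00:00
print(tso_to_utc(445315635319930881))   # txnStartTS    -> 2023-10-31 09:25:36.395000+00:00
```
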

| username: Miracle | Original post link

Isn’t the time of this GC safe point a bit out of sync…?

| username: 大飞哥online | Original post link

Check if the cluster time synchronization is normal.

| username: Zz_zZ | Original post link

The production environment is a single-machine environment, and there may be time synchronization issues. I will go on-site tomorrow to check the system logs and look into the ntpdate operation.

| username: Zz_zZ | Original post link

I copied the PD and TiKV data back to the test environment, and the time in the test environment is normal. I suspect that there might have been a time inconsistency before.

| username: 有猫万事足 | Original post link

Your timestamp has indeed gone to the year 2140. :joy:

| username: TiDBer_小阿飞 | Original post link

Haha! Only 117 years to go, it’s not that far, just something for the next next generation :grinning:

| username: Zz_zZ | Original post link

:man_facepalming: Given how this TSO works (see the column "PD's clock service: TSO" on the TiDB community site), when the time has jumped ahead, how can it be restored? It can't be forcibly modified, right?

  1. If I change the system time to 2140, will it recover, and then change it back to the correct time once things return to normal? That kind of time travel would make PD crash even more thoroughly, right? :joy:
  2. Or record the PD cluster ID and forcibly rebuild PD following this document: "PD Recover User Guide" in the PingCAP docs?
| username: 有猫万事足 | Original post link

I also haven't found a way to modify the safepoint.
Once it has been exceeded, subsequent queries and writes will certainly fail. The existing data should still be there, but to move the safepoint back you might have to rebuild the cluster and re-import the data.

Your second suggestion is worth trying, but I haven't actually done it, so I'm not sure what the consequences would be.

| username: Zz_zZ | Original post link

Tried the pd-recover metadata recovery solution in the test environment, and the initial test was successful.

Get the PD cluster ID

cat pd.log |grep "init cluster id"
[2023/11/01 07:35:58.449 +00:00] [INFO] [server.go:351] ["init cluster id"] [cluster-id=7275302868813208333]

Stop PD, TiKV, and TiDB, then delete the PD data

rm -rf /mnt/locals/tidb-pd/volume0/*

When the new PD and TiKV start, PD reinitializes and TiKV reports an error:

["failed to bootstrap node id: \"[src/server/node.rs:236]: cluster ID mismatch, local 7275302868813208333 != remote 7296392347708071649, you are trying to connect to another cluster, please reconnect to the correct PD\""]

Use pd-recover to restore the cluster ID

wget https://download.pingcap.org/tidb-community-toolkit-v5.2.2-linux-amd64.tar.gz
tar zxf tidb-community-toolkit-v5.2.2-linux-amd64.tar.gz
cd  tidb-community-toolkit-v5.2.2-linux-amd64
./bin/pd-recover -endpoints http://127.0.0.1:2379 -cluster-id 7275302868813208333 -alloc-id 10000

Then restart PD and TiKV; both start normally. Check PD's safe point:

./pd-ctl service-gc-safepoint
{
  "service_gc_safe_points": [],
  "gc_safe_point": 0
}

Run a SELECT in TiDB; it no longer reports error 9006:

 mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 403
Server version: 5.7.25-TiDB-v5.2.2 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> select * from mysql.tidb;
+--------------------------+---------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| VARIABLE_NAME            | VARIABLE_VALUE                                                                                    | COMMENT                                                                                     |
+--------------------------+---------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| bootstrapped             | True                                                                                              | Bootstrap flag. Do not delete.                                                              |
| tidb_server_version      | 72                                                                                                | Bootstrap version. Do not delete.                                                           |
| system_tz                | UTC                                                                                               | TiDB Global System Timezone.                                                                |
| new_collation_enabled    | False                                                                                             | If the new collations are enabled. Do not edit it.                                          |
| tikv_gc_leader_uuid      | 629bd7ea8d80002                                                                                   | Current GC worker leader UUID. (DO NOT EDIT)                                                |
| tikv_gc_leader_desc      | host:tidb-default-tidb-0, pid:1, start at 2023-09-07 11:20:55.101415326 +0000 UTC m=+42.568797401 | Host name and pid of current GC leader. (DO NOT EDIT)                                       |
| tikv_gc_enable           | true                                                                                              | Current GC enable status                                                                    |
| tikv_gc_run_interval     | 10m0s                                                                                             | GC run interval, at least 10m, in Go format.                                                |
| tikv_gc_life_time        | 10m0s                                                                                             | All versions within life time will not be collected by GC, at least 10m, in Go format.      |
| tikv_gc_auto_concurrency | true                                                                                              | Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used |
| tikv_gc_scan_lock_mode   | legacy                                                                                            | Mode of scanning locks, "physical" or "legacy"                                              |
| tikv_gc_mode             | distributed                                                                                       | Mode of GC, "central" or "distributed"                                                      |
+--------------------------+---------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
12 rows in set (0.002 sec)

But there is one last question: after PD started I waited through several GC intervals, and after more than 10 minutes this safe point is still empty. What triggers it to update?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.