How to Handle Offline Peer Regions in TiDB

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb 中offline peer region 如何处理

| username: TiDBer_yyy

[TiDB Usage Environment] Production Environment
[TiDB Version] 5.0.1

| username: tidb菜鸟一只 | Original post link

Did you perform any operations on the TiKV node?

| username: TiDBer_yyy | Original post link

We scaled out a long time ago; apart from that, there have been no operations on the cluster for about a year.

| username: tidb菜鸟一只 | Original post link

Did you scale in after scaling out? Generally, a scale-in leaves down/offline-state peers on the TiKV node being removed. As those peers gradually migrate to other nodes, the count should drop to 0 once the scale-in of that TiKV node completes.
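
As a quick way to verify this, you can list the stores in pd-ctl and check whether any TiKV node is still stuck in the Offline state (a sketch, assuming a TiUP-managed cluster and a PD endpoint at http://127.0.0.1:2379; adjust both to your environment):

tiup ctl:v5.0.1 pd -u http://127.0.0.1:2379 store
# In the output, check each store's "state_name": live nodes show "Up",
# while a store still shown as "Offline" has peers that have not finished migrating.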

| username: TiDBer_yyy | Original post link

A long time ago, I did a scale-in, probably about a year ago. I didn’t pay attention to this page at that time.

| username: tidb菜鸟一只 | Original post link

SELECT * FROM INFORMATION_SCHEMA.tikv_region_peers a WHERE a.status='DOWN';

Check which peers are down, then use the region_id to verify whether the corresponding regions still have enough replicas. If they do, these down peers are stale and can be ignored.

SELECT * FROM INFORMATION_SCHEMA.tikv_region_peers a WHERE a.region_id='476';
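
To cross-check the replica count in one pass, a query along these lines should also work (assuming the default max-replicas of 3; adjust if your placement rules differ):

-- List regions that currently have fewer than 3 peers
SELECT region_id, COUNT(*) AS peer_count
FROM INFORMATION_SCHEMA.TIKV_REGION_PEERS
GROUP BY region_id
HAVING COUNT(*) < 3;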

| username: TiDBer_yyy | Original post link

The result is empty

| username: tidb菜鸟一只 | Original post link

Try executing region check down-peer in pd-ctl. If it reports no issues, then the data cached by the monitoring system might be stale.
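
If you are not already in an interactive pd-ctl session, the same check can be run through TiUP, for example (again assuming a PD endpoint at http://127.0.0.1:2379):

tiup ctl:v5.0.1 pd -u http://127.0.0.1:2379 region check down-peer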

| username: TiDBer_yyy | Original post link

The result is empty. If it’s a cache issue, how can I clear it?
» region check down-peer
{
  "count": 0,
  "regions": []
}

| username: yilong | Original post link

Is there any output if you run the offline-peer check? Can you see the specific region information?
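
In an interactive pd-ctl session, the offline-peer check uses the same syntax as the down-peer check above:

» region check offline-peer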

| username: TiDBer_yyy | Original post link

Yes, there are quite a few. How should we handle them?

{
  "id": 6022449,
  "start_key": "7480000000000001FF155F728000000009FF5F5BB90000000000FA",
  "end_key": "7480000000000001FF155F728000000009FF610AC30000000000FA",
  "epoch": {
    "conf_ver": 2873,
    "version": 9824
  },
  "peers": [
    {
      "id": 6022450,
      "store_id": 5
    },
    {
      "id": 6022451,
      "store_id": 8
    },
    {
      "id": 6022452,
      "store_id": 6
    }
  ],
  "leader": {
    "id": 6022452,
    "store_id": 6
  },
  "written_bytes": 0,
  "read_bytes": 0,
  "written_keys": 0,
  "read_keys": 0,
  "approximate_size": 3,
  "approximate_keys": 81920
}
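
PD's offline-peer check reports regions that still have a peer on a store in the Offline state, so if this list stays non-empty long after the scale-in, a reasonable next step (a sketch, not something confirmed later in this thread) is to check the store list in pd-ctl and see whether the scaled-in TiKV node is still stuck in Offline:

» store
# Look for any store whose "state_name" is "Offline"; the regions it still
# holds are what region check offline-peer reports.
» store 5
# Inspect a single store by id (5 here is only an example taken from the
# peer list above, not a confirmed offline store).

If such a store can no longer be brought back, pd-ctl's operator add remove-peer <region_id> <store_id> can be used to schedule the leftover peers off it, but confirm the store's state first.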