TiKV Store Region score is very high, scaling down is very slow

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv Store Region score 评分很大,缩容很慢

| username: Holland

[TiDB Usage Environment] Production Environment / Test / Poc
Production
[TiDB Version]
6.5.1
[Reproduction Path] What operations were performed when the issue occurred
Scaled down one TiKV node from 6 KV nodes
[Encountered Issue: Issue Phenomenon and Impact]
Scaling down is very slow and the Store Region score is very high. The region count on the node stops changing once it reaches 2.
[Resource Configuration]
[Attachments: Screenshots / Logs / Monitoring]

| username: Holland | Original post link

What is the purpose of this Store Region score, and why is the score for store-4 so high?

| username: tidb菜鸟一只 | Original post link

Can you share the topology diagram? Does the node being scaled down have weaker resources?

| username: Holland | Original post link

All KV nodes are 20 cores and 128 GB.

| username: Holland | Original post link

The node being scaled down is already down to a region count of 2, and it hasn’t changed for more than 6 hours.

| username: Holland | Original post link

Store 4’s Region score is far higher than the others’.

| username: 裤衩儿飞上天 | Original post link

  1. Use pd-ctl to check the status of the two remaining regions.
  2. Check the logs of the node being taken offline.
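
The checks above can be run with pd-ctl. A sketch of the commands, assuming a PD endpoint at `127.0.0.1:2379` (substitute your own address), the cluster version from this thread, the two region IDs that appear later in the thread, and the default TiKV deploy/log paths:

```shell
# Hypothetical PD address; replace with your real one.
PD=http://127.0.0.1:2379

# Inspect the two regions still counted on the offline store:
tiup ctl:v6.5.1 pd -u "$PD" region 1824825
tiup ctl:v6.5.1 pd -u "$PD" region 2334539

# Check the offline store's state (it stays "Offline" until all
# of its region peers have been moved away, then becomes "Tombstone"):
tiup ctl:v6.5.1 pd -u "$PD" store 4

# Assumed default log path; adjust to your deployment:
tail -f /tidb-deploy/tikv-20160/log/tikv.log
```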

| username: Holland | Original post link

Only store 4 is offline, the rest are up.

| username: tidb菜鸟一只 | Original post link

Take a look at the status of the two remaining regions on store 4.

| username: Holland | Original post link

» region store 4
{
  "count": 2,
  "regions": [
    {
      "id": 1824825,
      "start_key": "7480000000000023FF6C5F72BD6859C19AFF28C2180000000000FA",
      "end_key": "7480000000000023FF6C5F72BD686A899AFF297EEB0000000000FA",
      "epoch": {
        "conf_ver": 100,
        "version": 16536
      },
      "peers": [
        {
          "id": 1824826,
          "store_id": 6,
          "role_name": "Voter"
        },
        {
          "id": 1824827,
          "store_id": 4,
          "role_name": "Voter"
        },
        {
          "id": 1824828,
          "store_id": 203,
          "role_name": "Voter"
        },
        {
          "id": 493879037,
          "store_id": 27,
          "role": 1,
          "role_name": "Learner",
          "is_learner": true
        }
      ],
      "leader": {
        "id": 1824826,
        "store_id": 6,
        "role_name": "Voter"
      },
      "pending_peers": [
        {
          "id": 493879037,
          "store_id": 27,
          "role": 1,
          "role_name": "Learner",
          "is_learner": true
        }
      ],
      "cpu_usage": 0,
      "written_bytes": 151,
      "read_bytes": 0,
      "written_keys": 2,
      "read_keys": 0,
      "approximate_size": 95,
      "approximate_keys": 227079
    },
    {
      "id": 2334539,
      "start_key": "7480000000000023FFA05F72FC00000002FF0FB3920000000000FA",
      "end_key": "7480000000000023FFA05F72FC00000002FF9EA1060000000000FA",
      "epoch": {
        "conf_ver": 138,
        "version": 26127
      },
      "peers": [
        {
          "id": 2334732,
          "store_id": 4,
          "role_name": "Voter"
        },
        {
          "id": 2334731,
          "store_id": 6,
          "role_name": "Voter"
        },
        {
          "id": 2334733,
          "store_id": 203,
          "role_name": "Voter"
        },
        {
          "id": 493878770,
          "store_id": 8,
          "role": 1,
          "role_name": "Learner",
          "is_learner": true
        }
      ],
      "leader": {
        "id": 2334733,
        "store_id": 203,
        "role_name": "Voter"
      },
      "pending_peers": [
        {
          "id": 493878770,
          "store_id": 8,
          "role": 1,
          "role_name": "Learner",
          "is_learner": true
        }
      ],
      "cpu_usage": 0,
      "written_bytes": 156,
      "read_bytes": 0,
      "written_keys": 2,
      "read_keys": 0,
      "approximate_size": 95,
      "approximate_keys": 284869
    }
  ]
}
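
Notably, both regions above carry a `pending_peers` entry for a Learner peer on another store (stores 27 and 8), which may be why the offline cannot finish: PD appears to still be waiting for those learner copies to catch up before removing the peers on store 4. A minimal sketch (plain Python, run against a trimmed copy of the output above) that flags such regions:

```python
import json

def pending_peers(pd_json: str):
    """Return (region_id, store_id) pairs for every pending peer
    in a pd-ctl `region store <id>` JSON dump."""
    data = json.loads(pd_json)
    return [(r["id"], p["store_id"])
            for r in data.get("regions", [])
            for p in r.get("pending_peers", [])]

# Trimmed copy of the `region store 4` output above.
sample = """
{
  "regions": [
    {"id": 1824825,
     "pending_peers": [{"id": 493879037, "store_id": 27, "role_name": "Learner"}]},
    {"id": 2334539,
     "pending_peers": [{"id": 493878770, "store_id": 8, "role_name": "Learner"}]}
  ]
}
"""
# Both stuck regions have a pending learner peer:
print(pending_peers(sample))
```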

| username: WalterWj | Original post link

Is a single node really storing that much data?

| username: Holland | Original post link

Do you mean the amount of data stored? That node has already been taken offline.

| username: WalterWj | Original post link

[The original reply was an image, which could not be translated.]

| username: Holland | Original post link

I don’t know why this panel shows nearly 30 TB either; in fact each TiKV node only has a 3.2 TB disk. This other panel looks normal.

| username: Holland | Original post link

The reason for the high score is that I had previously run `store weight 4 0 0` in pd-ctl, setting both the leader weight and the region weight of store 4 to 0.
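
For context: pd-ctl’s `store weight <store_id> <leader_weight> <region_weight>` scales a store’s scheduling scores by dividing by the configured weight, so a region weight of 0 drives the Region score toward a huge value, exactly as seen on store 4. A toy sketch of the effect (not PD’s exact formula, which also accounts for capacity and space amplification; the epsilon clamp is an assumption for illustration):

```python
def region_score(store_region_size: float, region_weight: float) -> float:
    """Toy model: a size-based score divided by the store's region
    weight. A weight of 0 (clamped here to a tiny epsilon) makes
    the score explode, which is what the monitoring panel showed."""
    eps = 1e-9
    return store_region_size / max(region_weight, eps)

normal = region_score(3_200_000, 1.0)  # ~3.2 TB store, default weight 1
zeroed = region_score(3_200_000, 0.0)  # after `store weight 4 0 0`
print(normal, zeroed)                  # the second value is astronomically larger
```

Resetting the weights with `store weight 4 1 1` should bring the score back in line with the other stores.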