Currently, when scaling in 3 out of 8 TiKV nodes simultaneously, a large offline-peer-region-count appears

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Currently scaling in 3 of 8 TiKV nodes at once, a large offline-peer-region-count appears

| username: xingzhenxiang

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] Scale down 3 machines
[Encountered Problem: Problem Phenomenon and Impact]
[Resource Configuration]
[Attachment: Screenshot/Log/Monitoring]

I would like to know whether the situation shown in the screenshot is normal after performing this operation, and whether it affects data integrity.

| username: h5n1 | Original post link

This is normal. During the scale-in process, regions migrate off the departing nodes and pass through an offline state; once the migration completes, the number of regions in these states will drop back down. It is still recommended to scale in one machine at a time.
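As a rough sketch of how to watch that migration drain, you can query PD through pd-ctl. The version tag, PD address, and port below are placeholders for your own environment:

```shell
# Count the regions that still have a peer on a store being scaled in.
# "region check offline-peer" lists regions whose peers sit on an
# Offline store; this number should fall toward zero as migration runs.
tiup ctl:v7.1.0 pd -u http://127.0.0.1:2379 region check offline-peer
```

Re-running this periodically (or watching the offline-peer-region-count panel in Grafana) shows the same number the screenshot is reporting.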

| username: xingzhenxiang | Original post link

Is my understanding correct that regions on TiKV nodes in Pending Offline status can still be scheduled by PD?

| username: h5n1 | Original post link

“Pending Offline” means that the region peers on that node are being scheduled (migrated) to other nodes.
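A minimal way to see those store states directly, again with placeholder version and PD address:

```shell
# Show all TiKV stores with their state and region counts.
# A store being scaled in reports state "Offline" while its peers are
# moved away, then "Tombstone" once it holds no more region peers.
tiup ctl:v7.1.0 pd -u http://127.0.0.1:2379 store
```

In the output, the `region_count` of the Offline stores should shrink as PD reschedules their peers onto the remaining nodes.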

| username: xingzhenxiang | Original post link

What about the Tombstone status? Does it mean the data has already been migrated? Even if data remains in the TiKV directory, it is no longer scheduled, so can it be cleaned up since it is no longer useful?

| username: h5n1 | Original post link

The regions have already been migrated, and the leftover data can be cleaned up with `tiup cluster prune`.
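A short sketch of that cleanup step, assuming your cluster is named `tidb-cluster` (a placeholder):

```shell
# Confirm the scaled-in nodes show status "Tombstone" first:
tiup cluster display tidb-cluster

# Then remove the Tombstone instances and their leftover data/config:
tiup cluster prune tidb-cluster
```

Run `prune` only after the stores have actually reached Tombstone; before that, the data directories may still hold region peers that PD has not finished migrating.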

| username: xingzhenxiang | Original post link

Okay, thank you.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.