How to quickly rebalance data after scaling TiKV in and out, and how to change 3 replicas to 5

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 请问下tikv扩缩容后数据怎么平衡得快,3副本数据怎么设置为5副本 (How can data be rebalanced quickly after scaling TiKV in/out, and how can 3 replicas be changed to 5?)

| username: TiDBer_Y2d2kiJh

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.4.0 (2 TiDB, 3 PD, 3 TiKV)
[Reproduction Path] We need to replace the SSDs, so we plan to replace the 3 TiKV nodes by scaling out new nodes and then scaling in the old ones. Is there a way to make the data rebalance faster? And how do we change the cluster from 3 replicas to 5 replicas?
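For reference, the usual disk-replacement flow with TiUP is scale-out followed by scale-in, roughly as sketched below (the cluster name, host, and port are placeholders, not details from this post):

```shell
# Sketch of replacing one TiKV node by scaling out and then in.
# <cluster-name>, <old-tikv-host>, and the port are placeholders.

# 1. Add a new TiKV node (on the new SSD) described in scale-out.yaml.
tiup cluster scale-out <cluster-name> scale-out.yaml

# 2. Once the new store is Up, remove the old node; PD migrates its
#    Region peers away before the store is finally tombstoned.
tiup cluster scale-in <cluster-name> --node <old-tikv-host>:20160

# 3. Repeat node by node for all three TiKV instances.
```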

| username: tidb菜鸟一只 | Original post link

Set the following PD scheduling parameters via pd-ctl:

config set region-schedule-limit 2
config set replica-schedule-limit 4
store limit all 5 (limits every store to adding or removing at most 5 peers per minute)
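A minimal sketch of applying these settings non-interactively through pd-ctl via TiUP (the PD endpoint is a placeholder; adjust it to your deployment):

```shell
# Apply the scheduling limits with one-shot pd-ctl calls.
# <pd-host> is a placeholder for one of your PD endpoints.
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 config set region-schedule-limit 2
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 config set replica-schedule-limit 4
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 store limit all 5
```

It is worth running `config show` first: these limits cap how many scheduling operations PD runs concurrently, so if the current values are already higher than the ones above, lowering them would slow the rebalance down rather than speed it up.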
To change from 3 replicas to 5, set max-replicas to 5 (config set max-replicas 5 in pd-ctl).
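Since max-replicas is also a PD configuration item, the same pd-ctl pattern applies; a sketch with the same placeholder endpoint:

```shell
# Raise the default replica count from 3 to 5.
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 config set max-replicas 5

# Confirm the replication settings took effect.
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 config show replication

# Watch per-store Region counts until they stabilize.
tiup ctl:v5.4.0 pd -u http://<pd-host>:2379 store
```

One caveat: each replica of a Region must live on a different store, so with only 3 TiKV nodes PD cannot actually place 5 replicas; the setting only takes full effect once at least 5 TiKV stores are available.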