How to Replace Servers for Three TiKV Instances

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 我想把三台tikv换服务器该怎么操作 (How do I move my three TiKV nodes to new servers?)

| username: TiDBer_7Q5CQdQd

The last three TiKV nodes are the ones I just scaled out. But after scaling out, I found there isn't much data in the /root/tidb/tidb-data/tikv-20160 directory on the new nodes, which makes me hesitant to scale in the original three TiKV nodes.
P.S.: TiKV is for storing data, right?

| username: zhanggame1 | Original post link

Wait a few days and then check again; look at the number of Regions on each TiKV.
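
For reference, one quick way to check the Region count on each store is pd-ctl through TiUP; the PD endpoint and version below are placeholders, not values from this thread:

```shell
# Assumed PD endpoint (http://127.0.0.1:2379) and cluster version (v7.5.0);
# replace both with your own cluster's values.
tiup ctl:v7.5.0 pd -u http://127.0.0.1:2379 store
# Each store entry reports "region_count" and "leader_count"; the new
# TiKV nodes are caught up once their counts are close to the old nodes'.
```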

| username: TiDBer_7Q5CQdQd | Original post link

Does it really take several days? How can we speed it up?

| username: zhanggame1 | Original post link

You can speed it up by adjusting the scheduling parameters, but if the data volume is not very large, rebalancing usually finishes within a day.

| username: tidb菜鸟一只 | Original post link

Adjust the following parameters: the larger the values, the faster Regions are balanced, but the greater the impact on running business traffic.
config set region-schedule-limit 2
config set replica-schedule-limit 4
store limit all 5
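
A minimal sketch of running these through tiup ctl, assuming a PD endpoint of http://127.0.0.1:2379 and cluster version v7.5.0 (both placeholders):

```shell
# PD endpoint and version are assumptions; substitute your own.
# Higher limits let PD run more Region/replica scheduling tasks at once.
tiup ctl:v7.5.0 pd -u http://127.0.0.1:2379 config set region-schedule-limit 2
tiup ctl:v7.5.0 pd -u http://127.0.0.1:2379 config set replica-schedule-limit 4
tiup ctl:v7.5.0 pd -u http://127.0.0.1:2379 store limit all 5
```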

| username: Hacker007 | Original post link

When you scale in a node, its data is migrated to the other nodes first.
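
You can watch that migration from pd-ctl: after the scale-in starts, the store goes Offline and its Region count drains to zero before it becomes Tombstone. The store ID and endpoint here are hypothetical:

```shell
# Hypothetical store ID (4) for the TiKV node being scaled in; find the
# real ID in the output of the `store` command.
tiup ctl:v7.5.0 pd -u http://127.0.0.1:2379 store 4
# While Regions migrate away, "state_name" shows Offline and
# "region_count" drops; it becomes Tombstone once the count reaches 0.
```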

| username: Fly-bird | Original post link

Wait until the data is balanced before you scale in.

| username: 普罗米修斯 | Original post link

1. Use pd-ctl to adjust the scheduling parameters;
2. In Grafana, check whether leader-count and region-count are balanced across the stores;
3. Once they are balanced, scale in the TiKV nodes one at a time to avoid losing multiple replicas of the same Region; see the sketch below.
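
As a minimal sketch of step 3, assuming a cluster named tidb-test and an old node at 10.0.1.1:20160 (both placeholders): scale in one node, wait for it to become Tombstone, then repeat for the next.

```shell
# Cluster name (tidb-test) and node address (10.0.1.1:20160) are
# assumptions; substitute your own.
tiup cluster scale-in tidb-test --node 10.0.1.1:20160
# Wait until `tiup cluster display tidb-test` shows the node as
# Tombstone, then clean it up before scaling in the next node.
tiup cluster prune tidb-test
```
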
| username: zhanggame1 | Original post link

How is it going so far?