How to Quickly Balance Region Data When Adding a New KV Instance

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 新增kv实例,如何快速region数据均衡 (Adding new KV instances: how to quickly balance region data)

| username: TiDBer_BrIoQ0NO

Production environment: v5.4.0
There are 3 TiKV instances in total, and disk space is now running low. We plan to scale out by adding 3 more TiKV instances. How can we speed up region data balancing?
Would adjusting the following PD scheduling parameters (via pd-ctl) help?

```shell
config set leader-schedule-limit 400
config set region-schedule-limit 20480000
config set max-pending-peer-count 64000
config set max-snapshot-count 64000
store limit all 200
```
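
For reference, a minimal sketch of applying and verifying these settings in one-shot pd-ctl mode; the PD endpoint below is a placeholder for this cluster's actual address:

```shell
# Placeholder PD endpoint; substitute the cluster's real PD address.
PD=http://127.0.0.1:2379

# Apply the scheduling limits (same commands as above, one-shot form).
tiup ctl:v5.4.0 pd -u $PD config set region-schedule-limit 20480000
tiup ctl:v5.4.0 pd -u $PD store limit all 200

# Verify the new values and watch region counts converge across stores.
tiup ctl:v5.4.0 pd -u $PD config show
tiup ctl:v5.4.0 pd -u $PD store limit
tiup ctl:v5.4.0 pd -u $PD store | grep -E '"address"|region_count|leader_count'
```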

Additionally, disk usage on the 3 old TiKV instances is still increasing and the remaining free space keeps shrinking. Why is this happening, and how can we prevent the old TiKV disks from filling up?

| username: TiDBer_jYQINSnf | Original post link

The parameters you adjusted are correct. Disk usage keeps growing because RocksDB only reclaims space after it compacts away the data of migrated regions; until compaction runs, the old SST files stay on disk. Be patient: once a disk approaches full, PD lowers that store's score so that no more regions are scheduled onto it.
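
If waiting for background compaction to catch up is not an option, a manual compaction can be triggered with tikv-ctl. A minimal sketch, assuming a TiKV address of 127.0.0.1:20160 (a placeholder; use each old store's real host:port):

```shell
# Placeholder TiKV address; run against each of the 3 old stores in turn.
TIKV=127.0.0.1:20160

# Compact the write and default column families of the kv RocksDB so space
# freed by regions that moved to the new stores is reclaimed sooner.
tiup ctl:v5.4.0 tikv --host $TIKV compact -d kv -c write
tiup ctl:v5.4.0 tikv --host $TIKV compact -d kv -c default
```

Manual compaction is I/O-heavy, so it is safer to run it on one store at a time during a low-traffic window.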

If a disk does fill up completely, look in the TiKV data directory for a file named space_placeholder_file; deleting it frees some reserved space so the instance can keep operating while regions migrate off.
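
A sketch of reclaiming that reserved space; the data directory path below is a placeholder, so check the actual deploy config (the file's size is governed by TiKV's storage.reserve-space setting):

```shell
# Placeholder data directory; find the real path in the TiKV deploy config.
ls -lh /data/tidb-data/tikv-20160/space_placeholder_file
rm /data/tidb-data/tikv-20160/space_placeholder_file
```

This only buys temporary headroom, so treat it as a last resort while region scheduling catches up.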

| username: forever | Original post link

If balancing is slow, you can refer to the troubleshooting approach used for slow TiKV decommissioning (scale-in); both rely on the same region-scheduling mechanism, so the same checks apply, as sketched below.
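
For example, a few of the pd-ctl checks used when a scale-in is slow also reveal whether balance operators are being generated here (the PD endpoint is again a placeholder):

```shell
# Placeholder PD endpoint.
PD=http://127.0.0.1:2379

tiup ctl:v5.4.0 pd -u $PD operator show                 # operators currently running
tiup ctl:v5.4.0 pd -u $PD scheduler show                # schedulers that are enabled
tiup ctl:v5.4.0 pd -u $PD region check pending-peer     # regions still mid-migration
```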

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. No new replies are allowed.