Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: Scaled out 3 TiKV nodes; how to speed up region data balancing
【TiDB Usage Environment】Production Environment
【TiDB Version】v5.4.0
【Encountered Problem】TiKV disk space is running low, so I added 3 new TiKV instances. However, region migration is very slow.
I raised leader-schedule-limit to 400 and region-schedule-limit to 20480000, but the effect is barely noticeable.
The maximum value for store limit all appears to be 200. I just tried setting it to 1500 and got this message:
» store limit all 1500
rate should be less than 200.000000 for all
So 200 seems to be the cap.
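For context, that 200 ops/min cap bounds how quickly the new stores can fill. A minimal back-of-envelope sketch; the region count and the assumption that all three new stores absorb regions at the full rate are hypothetical placeholders, not measurements from this cluster:

```python
# Rough estimate of rebalance duration under the store limit cap.
# All numbers below are illustrative assumptions, not cluster measurements.
STORE_LIMIT = 200   # region operations per minute per store (the cap hit above)
NEW_STORES = 3      # newly added TiKV instances

def hours_to_fill(regions_to_move: int,
                  limit: int = STORE_LIMIT,
                  stores: int = NEW_STORES) -> float:
    """Hours to move `regions_to_move` regions if each new store
    accepts `limit` region operations per minute."""
    ops_per_min = limit * stores
    return regions_to_move / ops_per_min / 60

# e.g. moving a hypothetical 100,000 regions onto the 3 new stores:
print(round(hours_to_fill(100_000), 1))  # → 2.8
```

This suggests the cap itself is rarely the bottleneck for day-scale rebalancing; snapshot generation and disk I/O on the source stores usually dominate.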
Okay, thank you. I’ll make some adjustments and observe.
If I increase the parameters max-pending-peer-count and max-snapshot-count, will that consume disk space on the old TiKV nodes?
I am now adjusting the following parameters:
config set leader-schedule-limit 400
config set region-schedule-limit 20480000
config set max-pending-peer-count 64000
config set max-snapshot-count 64000
store limit all 200
The disk usage on the 3 old TiKV instances is still growing, and the free space keeps shrinking.
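A sketch of how the settings above can be verified and the rebalance watched with pd-ctl; the PD address is a placeholder, and this needs a live cluster, so it is illustrative only:

```shell
# placeholder PD address; substitute your own
pd-ctl -u http://127.0.0.1:2379 config show      # confirm the schedule limits took effect
pd-ctl -u http://127.0.0.1:2379 store limit      # current per-store rate limits
pd-ctl -u http://127.0.0.1:2379 store            # region_count per store; the new stores should climb
pd-ctl -u http://127.0.0.1:2379 operator show    # balance operators currently in flight
```

Watching region_count converge across stores is the most direct signal that balancing is progressing.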
Generating snapshot files and receiving snapshots both take up disk space. Your settings are too high.
What size would be more appropriate?