Which Parameter Controls the Scheduling-Related Node Disk Usage Upper Limit?

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 控制和调度相关的节点磁盘使用上线的参数?

| username: Qiuchi

[TiDB Usage Environment] Production
[TiDB Version] 6.5.0
[Encountered Problem: Problem Phenomenon and Impact]
The disk usage on the cluster's TiKV nodes has exceeded 80%. We plan to scale out soon, but since the usage crossed 80%, the cluster's write performance seems to have declined: during large-scale writes, TiKV frequently drops region leaders. At first it looked like the apply speed couldn't keep up, but the issue persisted even after increasing the apply and I/O thread counts. This morning I happened to see a parameter described as preventing data from being placed on nodes whose disk usage exceeds a certain threshold, so I suspect the leader drops are caused by it, but I've forgotten the parameter's name and can't find it. If anyone knows it, please share. Thank you.

| username: Billmay表妹 | Original post link

The parameter you are thinking of is most likely PD's `low-space-ratio` (default 0.8). When a TiKV store's disk usage exceeds this ratio, PD treats the store as low on space and avoids scheduling new region replicas to it, to prevent the node from filling up.

Note that this is different from `store limit`, which caps the rate of scheduling operations (such as adding or removing peers) per store rather than its storage capacity. For the exact settings and how to adjust them, refer to the PD scheduling configuration documentation.
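As a rough illustration of the threshold behavior, the check behind PD's `low-space-ratio` setting (the parameter queried later in this thread) can be thought of as a simple ratio comparison. This is a minimal Python sketch of the idea, not PD's actual implementation:

```python
# Simplified sketch (NOT PD's real code): a store whose used-space ratio
# exceeds low_space_ratio (default 0.8) is treated as "low space", and
# PD stops scheduling new region replicas to it.

def is_low_space(capacity_bytes: int, used_bytes: int,
                 low_space_ratio: float = 0.8) -> bool:
    """Return True if the store's disk usage exceeds the ratio."""
    return used_bytes / capacity_bytes > low_space_ratio

TIB = 1 << 40
GIB = 1 << 30

# A 1 TiB store with 850 GiB used is at ~0.83, over the 0.8 default,
# so PD would avoid placing more data on it.
print(is_low_space(TIB, 850 * GIB))        # True
# Raising the ratio to 0.9 (as suggested below as a stopgap) clears it.
print(is_low_space(TIB, 850 * GIB, 0.9))   # False
```

This also shows why temporarily raising the ratio can buy time before scaling out: the same disk usage falls back under the threshold, so scheduling to the node resumes.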

| username: tidb菜鸟一只 | Original post link

SHOW CONFIG WHERE name LIKE '%low-space-ratio%';
This is a PD parameter (default 0.8). If you suspect it is causing the issue and the cluster cannot be scaled out immediately, you can temporarily raise it to 0.9 (for example with pd-ctl: `config set low-space-ratio 0.9`) to see whether the problem is alleviated.

| username: Qiuchi | Original post link

Here it is, thanks.