Issues with Tuning the schedule.low-space-ratio Parameter

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: schedule.low-space-ratio参数调优问题

| username: Kamner

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.4.0
[Reproduction Path] Operations performed that led to the issue
[Encountered Issue: Issue Phenomenon and Impact]
By default, the schedule.low-space-ratio parameter is set to 0.8, meaning that once a TiKV store's disk usage reaches 80%, PD tries to avoid scheduling more data to that store.

The problem is that as disk capacity grows, the 20% headroom left by the default 0.8 becomes an increasingly large amount of unusable space, and yet it is still easy to hit a `tikv disk full` error.

What is a reasonable setting for this parameter?
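For reference, this parameter can be inspected and changed online through pd-ctl, without restarting anything. A sketch of the commands, assuming tiup is installed and using a placeholder PD address (substitute your own address and cluster version):

```shell
# Show the current value of low-space-ratio (PD address is a placeholder)
tiup ctl:v5.4.0 pd -u http://127.0.0.1:2379 config show | grep low-space-ratio

# Raise the threshold to 90%; the change takes effect immediately
tiup ctl:v5.4.0 pd -u http://127.0.0.1:2379 config set low-space-ratio 0.9
```

These are cluster configuration commands, so they only make sense against a running PD.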

[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]

| username: TiDBer_jYQINSnf | Original post link

You can increase it; you only need a few GB of headroom left. On a 2 TB disk, setting it to 0.9 should be plenty (assuming the disk is dedicated to TiKV).
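As a sanity check on how much headroom a given ratio leaves, the free space at the threshold is simply capacity × (1 − ratio). A quick sketch with hypothetical sizes:

```shell
# Free space (GB) remaining when a store hits low-space-ratio,
# assuming the disk is dedicated to TiKV (sizes are hypothetical).
disk_gb=2048     # 2 TB disk
ratio_pct=90     # low-space-ratio = 0.9, expressed as a percent
echo $(( disk_gb * (100 - ratio_pct) / 100 ))   # about 204 GB still free
```

So on large disks, even 0.9 leaves far more absolute headroom than TiKV needs.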

| username: Kamner | Original post link

My main question is: since there is already a reserve-space parameter, what is the point of this one? And as disk capacity keeps growing, won't this parameter eventually have to approach 0.9999?

| username: 人如其名 | Original post link

This parameter only affects the weight used when scoring stores. Even if every node reaches 80%, writes will still be distributed evenly across them.

| username: Kamner | Original post link

The problem I’m encountering now is that all TiKV nodes have reached 80%, but there is still space available. However, data writes are failing with the error: Caused by: java.sql.BatchUpdateException: tikv disk full.
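When this happens, it is worth checking what PD itself reports for each store's capacity and available space, since scheduling decisions are based on those figures rather than on `df`. A sketch, again with a placeholder PD address:

```shell
# Show each store's capacity and available space as PD sees them
# (the `store` subcommand prints JSON; we filter the relevant fields)
tiup ctl:v5.4.0 pd -u http://127.0.0.1:2379 store \
  | grep -E '"capacity"|"available"'
```

Comparing these values against the configured low-space-ratio shows which stores PD considers low on space.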

| username: 裤衩儿飞上天 | Original post link

[The original reply contained only an image, which could not be translated.]

| username: tidb菜鸟一只 | Original post link

0.9 is fine.