Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: tikv为什么大小不一样 (Why are the TiKV sizes different?)
[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed that led to the issue
[Encountered Issue: Problem Phenomenon and Impact]
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots/Logs/Monitoring]
The space occupied by different TiKV nodes is inconsistent. What causes this, and how can it be resolved?
Is the balance process stuck? Are the store weights set differently? Check whether the labels are configured correctly.
First, confirm whether the disks of each TiKV node are the same size, then check the region distribution in the monitoring.
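One way to check the region distribution mentioned above (a sketch, not part of the original replies): `pd-ctl store` returns per-store statistics as JSON, and `INFORMATION_SCHEMA.TIKV_STORE_STATUS` exposes similar columns in SQL. Assuming you have saved the JSON output, a short script can summarize the spread. The field names below follow PD's usual output, but the sample values are made up:

```python
import json

# Sample shaped like the JSON returned by `pd-ctl store`
# (field names follow PD's output; the values are hypothetical).
sample = """
{"stores": [
  {"store": {"id": 1, "address": "10.0.0.1:20160"},
   "status": {"region_count": 4800, "region_size": 412000}},
  {"store": {"id": 2, "address": "10.0.0.2:20160"},
   "status": {"region_count": 5100, "region_size": 395000}},
  {"store": {"id": 3, "address": "10.0.0.3:20160"},
   "status": {"region_count": 2100, "region_size": 180000}}
]}
"""

def summarize(raw: str):
    """Return per-store region sizes (MiB) and the max-min spread."""
    stores = json.loads(raw)["stores"]
    sizes = {s["store"]["address"]: s["status"]["region_size"]
             for s in stores}
    spread = max(sizes.values()) - min(sizes.values())
    return sizes, spread

sizes, spread = summarize(sample)
for addr, size in sorted(sizes.items()):
    print(f"{addr}: {size} MiB of regions")
print(f"max-min spread: {spread} MiB")
```

A large spread like the one in this sample would suggest that scheduling is not keeping up, or that weights/labels are skewing placement.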
The disks are 500G each, and the difference between the largest and smallest TiKV is about 100G. That doesn't seem like a normal range, does it?
Take a look at the system tables.
Please share that table so we can take a look.
Hahaha, do you want to take a look at this tool: tidb-toolkit/scripts/tk_pdctl.py at main · realcp1018/tidb-toolkit · GitHub
Just fill in the IP and Port.
The region distribution differs.
Sometimes the region counts differ, but the total sizes will be similar.
If the TiKV disks are different sizes, there can be some difference. A very small data volume can also be a factor.
You need to check whether the balance scheduler is working and scheduling normally, and whether the scheduling policy is reasonable: for example, whether the label settings on TiKV and PD are correct, whether the disk sizes are consistent, whether the available space is the same, whether the load is balanced, and what the score of each instance node is.
Once you have checked all of the above, your problem should basically be resolved.
Are the host configurations the same?
If the difference is not large, it is a normal phenomenon; data can never be perfectly evenly distributed.
It is relatively balanced, not absolutely balanced.
Take a look at the settings.
Could it be that PD's scheduling creates regions on each TiKV inconsistently?
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.