Questions about Physical Space Usage

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 物理空间占用疑问

| username: Kongdom

[TiDB Usage Environment] Production Environment
[Encountered Issue: Problem Phenomenon and Impact]

Checking disk space with the df -Th command shows a maximum usage of 72G.

(screenshot: df -Th output)

In Grafana, the chart shows the region size as 380G.

At the same time, it also shows that one node is down and another node is running out of space.

(screenshot: Grafana panel)

The remaining two nodes are continuously balancing. Is it because they both think they are running out of space? Is this a bug?

| username: WalterWj | Original post link

  1. The difference in the displayed sizes should be a unit issue.
  2. You triggered the low-space threshold; you can adjust the low-space-ratio and high-space-ratio settings. You can check them with pd-ctl (see the sketch below).
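
A minimal sketch of checking and adjusting those thresholds with pd-ctl via tiup; the PD address 127.0.0.1:2379, the component version, and the ratio value are placeholders to adapt to your cluster:

```shell
# Show the current PD scheduling config, including low-space-ratio and high-space-ratio.
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 config show

# Example: let stores fill to 90% before PD treats them as low on space (illustrative value).
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 config set low-space-ratio 0.9
```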
| username: Kongdom | Original post link

However, the physical environment currently has 800G of available space. I am worried that TiDB is calculating based on a single replica of 300G; if it calculates based on 300G, there will definitely not be enough space.
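
One way to see how PD actually accounts for each store's space, rather than inferring it from Grafana, is pd-ctl's store command. A sketch, assuming the same placeholder PD address as above:

```shell
# For each TiKV store, compare:
#   capacity / available -> the disk size and free space PD believes the store has
#   used_size            -> space actually occupied on disk
#   region_size          -> logical (pre-compression) size of the regions, often several times used_size
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 store \
  | grep -E '"address"|"capacity"|"available"|"used_size"|"region_size"'
```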

| username: WalterWj | Original post link

Have you configured the capacity?

| username: db_user | Original post link

The store_region_size should not be the actual disk usage. In my cluster, each TiKV node uses 400G of disk, but the store_region_size is 2T, a bit more than 5 times, which is similar to your situation.

| username: Kongdom | Original post link

It is configured; the value is set to 18G, while the default should be 57G. Will reducing it have any impact?

| username: WalterWj | Original post link

I am asking about raftstore.capacity

| username: Kongdom | Original post link

This is not configured separately; the default value is used.

| username: WalterWj | Original post link

So theoretically, the default value is the size of the disk that holds your TiKV data directory.
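
If you would rather have PD budget against a fixed size than the whole disk, raftstore.capacity can be checked and set explicitly. A sketch using tiup; the cluster name tidb-prod, the MySQL connection details, and the 800GB value are only illustrative:

```shell
# Check the value TiKV currently reports (0 means the whole disk is used).
mysql -h 127.0.0.1 -P 4000 -u root -p -e \
  "SHOW CONFIG WHERE type='tikv' AND name='raftstore.capacity';"

# Edit the topology and add, under server_configs -> tikv:
#   raftstore.capacity: "800GB"
tiup cluster edit-config tidb-prod

# Roll the change out to the TiKV nodes.
tiup cluster reload tidb-prod -R tikv
```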

| username: WalterWj | Original post link

Moreover, your df screenshot does not match the size of your TiKV regions.

| username: WalterWj | Original post link

Take a screenshot of this part:

| username: Kongdom | Original post link

It should be similar to this

| username: Kongdom | Original post link

The right side shows a "low space" warning.

| username: WalterWj | Original post link

Your monitoring looks strange. The total capacity is 2T with only 15G used, yet there are 338,336 regions, and the number of normal storage nodes is 0.

That’s incredible :thinking:

| username: WalterWj | Original post link

Have any operations been performed on this cluster, for example a BR backup/restore or a Lightning local-mode import?

| username: tidb菜鸟一只 | Original post link

Let’s see what’s going on here

| username: 我是咖啡哥 | Original post link

What about the disk space usage in the Dashboard?

| username: Kongdom | Original post link

I have done BR backups, but I don’t think I have done any BR restores. There are a lot of empty regions.

I just upgraded to 5.4.3. Shortly after the upgrade this looked normal, but after a while it went back to 300G. It’s very strange.
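
Since there are a lot of empty regions, it may be worth checking whether region merge is keeping up; PD's merge settings can be inspected and, if needed, loosened with pd-ctl. A sketch with the same placeholder PD address and an illustrative limit value:

```shell
# List the empty regions PD currently sees (the output includes a "count" field).
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 region check empty-region

# Inspect the merge-related settings: max-merge-region-size, max-merge-region-keys,
# merge-schedule-limit.
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 config show

# Example: allow PD to schedule more merge operators at a time (default is 8).
tiup ctl:v5.4.3 pd -u http://127.0.0.1:2379 config set merge-schedule-limit 16
```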

| username: Kongdom | Original post link

This part looks normal.

| username: Kongdom | Original post link

The UI node has an issue and has been taken offline, so I can’t view the Dashboard now.