Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: Why is the compression ratio in PD monitoring 0.01? Shouldn't it be greater than 1? (pd监控里压缩比为什么是0.01,不是应该大于1?)
[TiDB Usage Environment] Testing
[TiDB Version] v6.5
[Observed Phenomenon]

This is my test environment with very little data. Why is the storage compression ratio 0.01? Has the data been amplified instead of compressed? Does anyone have any insights?
123 GB is the available disk capacity, and the right side is the actual usage.
Storage capacity is your total available capacity; current is how much you are currently using.
Size amplification mainly refers to this diagram 
Uh. What I want to ask about is the picture below; the picture above just indicates that the test environment has a very small amount of data.
Open the panel and take a look: how is this metric defined?
This value is not fixed; as the data gradually increases, it will eventually exceed 1.
The data volume is still small, so it hasn't spread across multiple levels yet, right?
What does this compression ratio actually compare? How was your data generated? Normally, it should be greater than 1.
sum(pd_scheduler_store_status{type="region_size"}) by (address, store)
/ sum(pd_scheduler_store_status{type="store_used"}) by (address, store) * 2^20
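If I'm reading the expression right, region_size is reported in MiB while store_used is in bytes, so the * 2^20 factor puts both sides in bytes: the panel divides the logical region data size by the physical space the store uses. A rough sketch of that arithmetic (the numbers and the unit assumption are illustrative, not taken from the dashboard):

# Rough sketch of the PD "size amplification" expression above.
# Assumption: region_size is in MiB and store_used is in bytes, which is
# what the * 2^20 conversion factor suggests.

def compression_ratio(region_size_mib: float, store_used_bytes: float) -> float:
    """Logical region data size divided by physical space used on the store."""
    return region_size_mib * 2**20 / store_used_bytes

# Tiny test cluster: regions report ~1 MiB of logical data, but the store
# already occupies ~100 MiB on disk (pre-allocated files, WAL, metadata ...),
# so the ratio comes out far below 1.
print(compression_ratio(1, 100 * 2**20))        # 0.01

# Larger data set: 10 GiB of logical region data held in ~4 GiB on disk.
print(compression_ratio(10 * 1024, 4 * 2**30))  # 2.5 -> real compression shows up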
You can test it by writing some random data; this is the result. I also expect it to be greater than 1.
Does it have anything to do with the number of levels?
Is there a detailed explanation?
Where can I find reference materials?
It is related to the amount of data; the amount of data determines which RocksDB level it ends up in.
(The reply here was a link to a Zhihu article.)
To add:
Data is stored at levels L0/L1/L2/L3/L4; by default, levels 0 and 1 are not compressed.
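As a rough illustration of the two points above (the amount of data decides which RocksDB level it reaches, and the low levels are uncompressed), here is a sketch; the per-level list mirrors the commonly cited TiKV default of ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"] for the default CF, but the actual values depend on your version and configuration, so check your own settings.

# Sketch only: per-level compression as commonly configured in TiKV's RocksDB.
# These values are an assumption; confirm them in your own tikv.toml
# ([rocksdb.defaultcf] compression-per-level) or via SHOW CONFIG.
DEFAULT_COMPRESSION_PER_LEVEL = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]

def compression_for_level(level: int) -> str:
    """Compression algorithm applied to SST files at the given LSM level."""
    return DEFAULT_COMPRESSION_PER_LEVEL[level]

# A tiny test data set tends to sit in L0/L1, which are not compressed,
# so no space saving shows up and the measured ratio can stay below 1.
for level in range(5):
    print(f"L{level}: {compression_for_level(level)}")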