Several Questions About tikv_number_files_at_each_level

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv_number_files_at_each_level 的几个问题

| username: banana_jian

Regarding the questions about tikv_number_files_at_each_level:

  1. Is the number of files at each level in the monitoring the current number of files?
  2. Is the last level in the level items level 6?
  3. Why didn’t I capture any changes in the number of files in levels 1-5 during my tests? They always remained at 0.
  4. Why does the number of files in level 6 decrease?

Thanks in advance for your help.

| username: h5n1 | Original post link

  1. Yes, it is the current number of files, counted separately for each CF (column family).
  2. The number of levels is controlled by the num_levels parameter, which defaults to 7, so the levels are numbered 0 through 6 and the last one is level 6. On some monitoring panels it is simply labeled the bottommost level.
  3. Try setting the panel’s unit to none and check whether the data volume is large enough (see the query sketch after this list).
  4. If there is a large amount of deleted data, the number of files at the bottommost level may decrease during compaction.
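
To make answers (1) and (3) concrete, here is a minimal sketch of pulling the per-level file counts straight from Prometheus instead of Grafana. It assumes the underlying TiKV metric is tikv_engine_num_files_at_level with db/cf/level labels and that Prometheus listens on http://127.0.0.1:9090; both are assumptions, adjust them to your cluster.

```python
# Minimal sketch: read the current per-level SST file counts from Prometheus.
# Assumes the metric name tikv_engine_num_files_at_level with db/cf/level
# labels and a Prometheus server at http://127.0.0.1:9090 -- adjust as needed.
import json
import urllib.parse
import urllib.request

PROM = "http://127.0.0.1:9090"
QUERY = 'tikv_engine_num_files_at_level{db="kv", cf="default"}'

url = PROM + "/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# One sample per (instance, level); value[1] holds the current file count.
for sample in data["data"]["result"]:
    labels = sample["metric"]
    print(labels.get("instance"), "level", labels.get("level"),
          "->", sample["value"][1], "files")
```
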
| username: banana_jian | Original post link

Thank you.
3. Try setting the panel’s unit to none and check whether the data volume is large enough.
– When I tested, there was data in level 0, but strangely it seemed to skip levels 1, 2, 3, 4, and 5, and the number of files in level 6 increased directly.
4. If there is a large amount of deleted data, the number of files at the bottommost level may decrease during compaction.
– Are you saying that if no data is deleted, the number of files in level 6 will keep increasing? Also, is there any correlation between this file count and the actual SST files on disk?

| username: banana_jian | Original post link

When I was doing the test, why didn’t I capture any changes in the files at levels 1-5? They always remained at 0.
– I also checked the tables in the database, and they are all 0 there as well:

| time | instance | cf | level | db | value |
| --- | --- | --- | --- | --- | --- |
| 2022-07-06 09:57:13.135000 | 192.168.135.149:20180 | default | 0 | kv | 2 |
| 2022-07-06 09:58:13.135000 | 192.168.135.149:20180 | default | 0 | kv | 2 |
| 2022-07-06 09:59:13.135000 | 192.168.135.149:20180 | default | 0 | kv | 2 |
| 2022-07-06 10:00:13.135000 | 192.168.135.149:20180 | default | 0 | kv | 3 |
| 2022-07-06 09:50:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:51:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:52:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:53:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:54:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:55:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:56:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:57:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:58:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:59:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 10:00:13.135000 | 192.168.135.149:20180 | default | 1 | kv | 0 |
| 2022-07-06 09:50:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:51:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:52:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:53:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:54:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:55:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:56:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:57:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:58:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:59:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 10:00:13.135000 | 192.168.135.149:20180 | default | 2 | kv | 0 |
| 2022-07-06 09:50:13.135000 | 192.168.135.149:20180 | default | 6 | kv | 2 |
| 2022-07-06 09:51:13.135000 | 192.168.135.149:20180 | default | 6 | kv | 2 |
| 2022-07-06 09:52:13.135000 | 192.168.135.149:20180 | default | 6 | kv | 2 |
| 2022-07-06 09:53:13.135000 | 192.168.135.149:20180 | default | 6 | kv | 2 |
| 2022-07-06 09:54:13.135000 | 192.168.135.149:20180 | default | 6 | kv | 2 |

| username: h5n1 | Original post link

It seems there is a feature where, during large-scale data insertion, the data does not go through the normal flush/compaction path; instead, SST files are ingested directly into the bottommost level.

If the data in the bottommost-level files is not deleted, their number will keep increasing. The number of files corresponds to the SST files on disk.
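
As a quick way to check that correspondence yourself, here is a minimal sketch that counts the SST files of the kv engine on one TiKV node. The path below is a hypothetical deployment directory, and it assumes the kv engine’s RocksDB instance lives in the db subdirectory of the TiKV data directory; adjust both to your environment.

```python
# Minimal sketch: count on-disk SST files for the TiKV kv engine so the number
# can be compared with the tikv_number_files_at_each_level panel.
# The directory below is a hypothetical example -- point it at your own
# tikv data-dir; the kv engine usually sits in its "db" subdirectory.
from pathlib import Path

kv_engine_dir = Path("/data/tikv-20160/db")  # hypothetical path, adjust
sst_files = sorted(kv_engine_dir.glob("*.sst"))

print(f"{len(sst_files)} SST files under {kv_engine_dir}")
for f in sst_files[:10]:  # show the first few for a spot check
    print(" ", f.name, f.stat().st_size, "bytes")
```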

| username: banana_jian | Original post link

It seems there is a feature where, during large-scale data insertion, the data does not go through the normal flush/compaction path; instead, SST files are ingested directly into the bottommost level.

– Could you please let me know whether this is mentioned in the official documentation? I would like to learn more about it.

| username: ddhe9527 | Original post link

LevelDB has this behavior, and RocksDB is built on top of LevelDB, so it should have it as well. You can search for the function PickLevelForMemTableOutput in the article below:
https://zhuanlan.zhihu.com/p/51573929
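
For intuition, here is a heavily simplified sketch (in Python, not the actual LevelDB/RocksDB C++ code) of the idea behind picking a deeper target level for a newly produced SST file: keep pushing it down as long as its key range does not overlap anything in the next level. The data structures and the seven-level layout are illustrative stand-ins; the real implementations also bound how deep a flushed memtable may sink and check overlap with deeper levels.

```python
# Simplified illustration of the "pick the deepest non-overlapping level" idea
# discussed above (PickLevelForMemTableOutput / SST ingestion). Not the real
# C++ code: the real versions add extra limits on how far a file may sink.

def ranges_overlap(a, b):
    """Key ranges are (smallest_key, largest_key) tuples of byte strings."""
    return not (a[1] < b[0] or b[1] < a[0])

def pick_target_level(new_file_range, levels):
    """levels[i] is the list of key ranges of SST files already at level i."""
    target = 0
    # Starting from level 0, push the new file down while the next level
    # contains nothing that overlaps its key range.
    while target + 1 < len(levels):
        if any(ranges_overlap(new_file_range, r) for r in levels[target + 1]):
            break
        target += 1
    return target

# Example: levels 0-5 are empty and level 6 holds a non-overlapping range,
# so a freshly written SST file can land directly at level 6.
levels = [[] for _ in range(7)]
levels[6] = [(b"a", b"c")]
print(pick_target_level((b"m", b"p"), levels))  # -> 6
```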

| username: HTAP萌新 | Original post link

Can this behavior of writing directly to the last level be controlled by a parameter?

| username: HTAP萌新 | Original post link

I also ran into this problem when using Lightning to import data in bulk. Did you later try to prevent it from flushing directly to level 6?

| username: h5n1 | Original post link

There probably isn’t one.

| username: banana_jian | Original post link

No :disappointed:

| username: system | Original post link

This topic was automatically closed 1 minute after the last reply. No new replies are allowed.