Investigating the Root Cause of Continuously Accumulating Pending Compaction Bytes After Scaling Out

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 续扩容引起pending compaction bytes累计增加排查根因

| username: robert233

[TiDB Usage Environment]

  • Production

[TiDB Version]

  • v4.0.12

[Problem Phenomenon and Impact]

  • To recover, adjusted rocksdb.max-background-jobs from 8 to 12 and restarted the cluster (see the configuration sketch at the end of this post).

  • According to the corresponding metrics, RocksDB CPU stayed at 100% for the duration of the scale-out.

  • The description of the RocksDB CPU metric is as follows:

  • A few questions:
    (1) Is RocksDB single-threaded, and which parameters limit its resource usage?
    (2) After raising rocksdb.max-background-jobs from 8 to 12, why did CPU usage increase so much?
    (3) This metric has no corresponding alert in the official template; is one necessary?
    [Screenshot: Raft store CPU panel]

    [Screenshot: RocksDB CPU panel]
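
For reference, here is a minimal sketch of the change described above as it would appear in tikv.toml (assuming a standard TiKV v4.0 deployment; with TiUP the same value would go under server_configs.tikv in the topology file):

```toml
# Sketch of the adjustment described in this post, not a recommended value.
[rocksdb]
# Total number of background threads RocksDB may use for compactions and
# flushes. Raising it from 8 to 12 allows more compactions to run in
# parallel, at the cost of proportionally more CPU.
max-background-jobs = 12
```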

| username: h5n1 | Original post link

When was this adjustment made? RocksDB is not single-threaded, and TiKV has parameters to control compaction and flush.
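
To make question (1) concrete: RocksDB is not single-threaded; it runs compactions and flushes on a pool of background threads, and TiKV exposes several knobs that bound that work. A non-exhaustive sketch follows, with names taken from the TiKV configuration template (the values shown are assumed defaults; verify them against your exact version before changing anything):

```toml
[rocksdb]
max-background-jobs = 8      # threads shared by background compactions and flushes
max-sub-compactions = 3      # sub-threads one compaction job may fan out to
rate-bytes-per-sec = "10GB"  # I/O rate limiter for compaction and flush

[rocksdb.defaultcf]
# Flow-control thresholds tied directly to pending compaction bytes:
# writes are slowed past the soft limit and stopped past the hard limit.
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
```

This is also the link to the symptom in the title: when the compaction threads cannot keep up (all of them pegged at 100% CPU), pending compaction bytes keep growing until these flow-control limits kick in and stall writes.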

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.