TiKV Memory Control

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TiKV内存控制 (TiKV Memory Control)

| username: 江湖故人

To prevent the OS from running out of memory, should we set the block-cache, write-buffer, and memory-usage-limit?

| username: Jellybean | Original post link

By default, TiKV won't exhaust system memory. You can check the official documentation to confirm whether the default is 60% or 80% of total memory.

| username: hey-hoho | Original post link

Generally, setting the block cache is sufficient, especially in mixed deployment scenarios.

| username: 春风十里 | Original post link

In TiKV, you should size RocksDB's block cache according to the machine's memory to make full use of it; if you don't configure it, default settings apply. By default, all CFs (Column Families) share a single block cache instance. You can set its size via the capacity parameter under [storage.block-cache]. A larger block cache holds more hot data and speeds up reads, but it also occupies more system memory.

TiKV Configuration File Description | PingCAP Documentation Center


Configuration options for sharing block cache among multiple CFs in RocksDB.


  • The size of the shared block cache.
  • Default values:
    • When storage.engine="raft-kv", the default value is 45% of the total system memory.
    • When storage.engine="partitioned-raft-kv", the default value is 30% of the total system memory.
  • Units: KB|MB|GB
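
As an illustration, overriding the shared block cache size explicitly is a small fragment in the TiKV configuration file (the 16GB value is just an example; choose a size appropriate for your machine):

```toml
# Example TiKV configuration fragment (illustrative value).
# If capacity is not set, TiKV uses the engine-dependent default
# described above (45% or 30% of total system memory).
[storage.block-cache]
capacity = "16GB"
```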

TiKV Memory Parameter Performance Tuning | PingCAP Documentation Center

| username: tidb菜鸟一只 | Original post link

Generally, setting storage.block-cache.capacity alone is sufficient. If there is only one TiKV node on the machine, set it to around 45% of the total memory.
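
For example, on a hypothetical machine with 64 GB of RAM running a single TiKV node, 45% works out to roughly 28 GB:

```toml
# Hypothetical single-TiKV-node machine with 64 GB RAM:
# 64 GB * 0.45 = 28.8 GB, rounded down here
[storage.block-cache]
capacity = "28GB"
```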

| username: 江湖故人 | Original post link

Thanks to all the experts! Setting all three values explicitly has always seemed a bit odd to me. The official documentation says that in some deployment modes, memory-usage-limit is calculated from block-cache.

For comparison, memory management in Oracle works as follows:

  1. When automatic memory management is enabled, set memory_target to a non-zero value (similar to memory-usage-limit);
  2. When automatic shared memory management is enabled, set sga_target to a non-zero value and memory_target to 0;
  3. When manual shared memory management is used, both of the above targets are set to 0, and values for other memory components are specified (similar to block-cache, write-buffer), which is generally not recommended.
| username: TIDB-Learner | Original post link

The block cache discussion resolved my confusion too. :slightly_smiling_face:

| username: zhanggame1 | Original post link

I have multiple TiKV nodes deployed in a mixed environment. Testing on v7.5 shows that block-cache.capacity * 1.25 acts as the memory limit and is not exceeded.

With storage.block-cache.capacity set to 20G, the limit is 25G. In practice, none of the three TiKV nodes reaches 25G.
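
The setup described in this post corresponds to a fragment like the following (the ~1.25x ceiling is the poster's empirical observation on v7.5, not a documented guarantee):

```toml
[storage.block-cache]
# 20 GB shared block cache; in the poster's test, process memory
# stayed below ~25 GB (capacity * 1.25)
capacity = "20GB"
```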

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.