TiKV Memory Tuning

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv内存调优 (TiKV memory tuning)

| username: Jolyne

[TiDB Usage Environment] Production Environment / Testing / PoC
Production Environment
[TiDB Version]
5.2.1
[Reproduction Path] What operations were performed when the issue occurred
Recently, we found that a TiKV node restarts every few hours. Investigation showed the restarts were caused by OOM. The current TiKV configuration is:
(screenshot of the TiKV configuration)
The machine has 64 GB of memory. Is it reasonable to configure the RocksDB default and write column families like this? What value should the unified read pool thread count be set to?
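For reference, a sketch of the relevant `tikv.toml` sections on a 64 GB node. The key names (`storage.block-cache.capacity`, `readpool.unified.max-thread-count`) are real TiKV configuration items, but the values below are illustrative assumptions, not tuned recommendations:

```toml
# Illustrative tikv.toml fragment -- values are examples only.
[storage.block-cache]
# Shared RocksDB block cache across all column families (defaultcf, writecf, lockcf).
# The TiKV docs suggest roughly 45% of system memory: 45% of 64 GB is about 28 GB.
capacity = "28GB"

[readpool.unified]
# Unified read pool thread count; a common starting point is ~80% of CPU cores,
# e.g. for a hypothetical 16-core machine:
max-thread-count = 12
```

Leaving the remaining memory for memtables, gRPC buffers, and the OS page cache is what keeps the node clear of OOM.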
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration]
[Attachments: Screenshots / Logs / Monitoring]

| username: Raymond | Original post link

Did you deploy TiKV together with other components?

| username: Jolyne | Original post link

No, each component is on a separate machine, but the disk performance is a bit poor.

| username: TiDBer_jYQINSnf | Original post link

Memtables also consume a lot of memory, so leave some headroom for them. You can refer to: TiKV Memory Parameter Performance Tuning | PingCAP Docs
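As a rough back-of-the-envelope check, the worst-case memtable footprint is write-buffer-size times max-write-buffer-number, summed over the column families. The numbers below are illustrative assumptions; check the actual values in your `tikv.toml`:

```python
# Rough upper-bound estimate of TiKV memtable memory (illustrative values).
MIB = 1024 * 1024

write_buffer_size = 128 * MIB    # assumed per-memtable size
max_write_buffer_number = 5      # assumed memtables allowed per column family
column_families = 3              # defaultcf, writecf, lockcf

# Worst case: every column family has all its memtables full at once.
worst_case = write_buffer_size * max_write_buffer_number * column_families
print(worst_case // MIB, "MiB")  # prints: 1920 MiB
```

This is an upper bound, not typical usage, but it shows why the block cache alone must not be sized to fill all of RAM.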

| username: h5n1 | Original post link

| username: tidb菜鸟一只 | Original post link

It is recommended to set grpc-memory-pool-quota

  • The memory size limit that gRPC can use.
  • Default value: unlimited
  • It is recommended to set this limit only when you are actually hitting out-of-memory (OOM) issues. Note that limiting memory usage may cause performance stalls.
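For context, `grpc-memory-pool-quota` lives under the `[server]` section of `tikv.toml`. A minimal sketch, with an assumed example value (the default is unlimited):

```toml
# Illustrative fragment: cap gRPC memory only if you are hitting OOM.
[server]
grpc-memory-pool-quota = "4GB"   # example value, not a recommendation
```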