The meaning of each thread shown by `top` on a TiKV instance

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TIKV实例上TOP获取到的每个线程的含义 (The meaning of each thread obtained by top on a TiKV instance)

| username: residentevil

[TiDB Usage Environment] Production Environment
[TiDB Version] V6.1.7
[Encountered Problem: Phenomenon and Impact] During the full data import stage in production, running `top -H` on a TiKV instance shows many rocksdb:low0~6 threads and apply-0~1 threads. Could you explain what the rocksdb:low threads are doing? The current import speed is about 100,000 rows per second. Is there any configuration that could be tuned to increase it? [Hardware resources are currently very sufficient; a single TiKV process is using only around 4 cores.]
[Attachment: Screenshot/Log/Monitoring]
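As a side note, the per-thread names that `top -H` displays can also be read directly from procfs. A minimal sketch, assuming a single `tikv-server` process on the host (the `pidof` lookup is that assumption):

```shell
# Sketch: list the per-thread names of a running tikv-server,
# the same names that top -H shows in its COMMAND column.
# Assumes exactly one tikv-server process on this host.
pid=$(pidof tikv-server)

# Each thread appears as /proc/<pid>/task/<tid>; its display name
# (e.g. rocksdb:low0, apply-0) is stored in the comm file.
cat /proc/"$pid"/task/*/comm | sort | uniq -c | sort -rn
```

Counting the names this way makes it easy to see at a glance how many threads each pool has spawned.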

| username: 昵称想不起来了 | Original post link

You can refer to

| username: xfworld | Original post link

Observing through Grafana will be more intuitive…

If you want to stress test, consider the scenarios provided by the stress testing tools…

| username: residentevil | Original post link

The main issue now is that we don’t know where the bottleneck is. The write path for this SQL is quite long, and the monitoring data doesn’t show any particular stage hitting a bottleneck.

| username: residentevil | Original post link

  1. It seems that the rocksdb:low threads are related to compaction and flushing the memtable.
  2. I’ll take another look at that document on the slow write performance issue. It looks quite detailed. Thank you.

| username: Fly-bird | Original post link

OP, try to see if there are any configurations that can improve the ingestion speed.
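As a sketch of what such configuration might look like, the TiKV settings below size the background pool (where the rocksdb:low compaction threads run) and the apply pool (the apply-N threads). The values are illustrative assumptions for a write-heavy import, not tuned recommendations; verify them against your own monitoring:

```toml
# TiKV config sketch -- illustrative values only.

[rocksdb]
# Background job pool; compactions run in its low-priority
# portion (the rocksdb:low threads seen in top -H).
max-background-jobs = 8
# Allow one large compaction to be split into sub-compactions.
max-sub-compactions = 3

[raftstore]
# Threads that apply committed Raft entries (the apply-N threads).
apply-pool-size = 3
# Threads that drive the Raft state machine (store-N threads).
store-pool-size = 3
```

Since the process is only using ~4 cores, raising pool sizes only helps if some pool is actually saturated, which the TiKV thread-CPU panels in Grafana should confirm first.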

| username: xfworld | Original post link

There are quite a few points that need to be checked. Refer to this:

| username: residentevil | Original post link

I’m analyzing each monitoring panel one by one to see which stage is slowing things down.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.