Rate limiting can be divided into several aspects:
On the tidb-server side, a token count limits the number of frontend connections;
Rate limiting for the various thread pools: gRPC, Scheduler, UnifyReadPool, Raftstore, Apply, and RocksDB. The most important is the Raftstore thread pool configuration, which caps the number of CPU cores that write requests can use; read requests can be rate-limited as well;
RocksDB-level rate limiting, via limits on memtable size and the number of files at each level of the LSM tree. This mainly throttles IO, but it is a last-resort limit and has little practical effect, since it usually only kicks in when CPU usage is already high;
Of course, there are also various memory configurations to limit the rate.
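The knobs above can be sketched as a tikv.toml fragment. Note this is only an illustrative sketch: the keys are real TiKV configuration items, but the values here are examples, not tuning recommendations.

```toml
[server]
grpc-concurrency = 4            # gRPC thread pool size

[readpool.unified]
max-thread-count = 8            # UnifyReadPool, limits read-request CPU

[raftstore]
store-pool-size = 2             # Raftstore threads (write path)
apply-pool-size = 2             # Apply threads

[storage]
scheduler-worker-pool-size = 8  # Scheduler worker threads

[rocksdb]
max-background-jobs = 8         # RocksDB flush/compaction threads
rate-bytes-per-sec = "100MB"    # RocksDB IO rate limiter
```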
Personally, I think the main knob to adjust is the number of threads in each thread pool.
May I ask if it is normal that in Grafana the Scheduler worker CPU of one TiKV node is much higher than on the other two nodes? Also, after I adjusted storage.scheduler-worker-pool-size to 8, what does the red line in Grafana represent?
Much higher is definitely abnormal. The red line in Grafana is an alert threshold. For example, if you expect SQL response time to stay well below 500 ms, you can set 500 ms as the threshold; anything above it is abnormal and needs to be investigated.
That doesn’t seem to be the case here. Increasing scheduler-worker-pool-size had no effect. Only after I reduced TiKV’s memory usage during insertion and lowered the number of sysbench threads did TiKV stop crashing while loading data.
If there are too many batches, is there any good solution? Reducing the number of insert threads makes loading far too slow.
Tuning raftstore.apply-max-batch-size / raftstore.store-max-batch-size might help?
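For reference, both batch-size parameters live under the raftstore section. This is a hypothetical fragment with illustrative values (the TiKV default for both is 256), not a recommendation:

```toml
[raftstore]
# Larger batches let each poll round process more Raft "ready"
# states / apply tasks per thread wakeup, at the cost of latency.
store-max-batch-size = 1024
apply-max-batch-size = 1024
```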
I don’t know how you are generating data with sysbench. If it’s the default approach, multiple threads each inserting one row at a time, then batching the inserts might work better?