When using sysbench to stress test TiDB, the CPU usage of one node does not increase

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 用sysbench 压测tidb的时候某一台节点的cpu 使用率上不去

| username: Raymond

Version: v6.1.0
Using Hygon CPUs and the Kylin OS, we ran performance tests with sysbench connecting through HAProxy. Three machines were deployed, each hosting 2 TiDB servers, 1 TiKV node, and 1 PD node, with identical CPU, memory, and storage resources. We found that with more than 100 concurrent sysbench threads, the CPU usage of one machine would not rise: machine A averaged around 70% CPU, while machine C averaged only 35%, and on machine C neither the TiDB servers nor the TiKV node could drive CPU usage any higher. After analysis:

  1. No components crashed during the test.
  2. There were no obvious read or write hotspots.
  3. Regions and leaders were evenly distributed across the three TiKV nodes.
  4. Each TiDB server received roughly the same number of connections.
  5. When we bypassed HAProxy and connected sysbench directly to the TiDB server on machine C, its CPU usage rose normally.
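
For reference, a stress test like the one described above is typically driven roughly as follows; the HAProxy address, port, credentials, and table parameters here are placeholders, not the exact command used:

```sh
# Sketch: sysbench OLTP read/write workload pointed at HAProxy,
# which then spreads the connections across the TiDB servers.
sysbench oltp_read_write \
  --mysql-host=<haproxy-host> --mysql-port=3390 \
  --mysql-user=root --mysql-password=<password> \
  --mysql-db=sbtest --tables=16 --table-size=1000000 \
  --threads=128 --time=600 --report-interval=10 \
  run
```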

Could the experts please help identify what might be causing this issue?

| username: xfworld | Original post link

Please provide a diagram. Which node is A, and which node is C?

What load-balancing strategy is HAProxy using?

| username: Raymond | Original post link

HAProxy uses the least-connections (leastconn) strategy, so each TiDB server node gets an equal number of connections.
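
For context, a minimal sketch of such an HAProxy configuration; the bind port and server addresses are placeholders, not the actual setup:

```
# Sketch: TCP proxying to TiDB with least-connections balancing.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30m
    timeout server  30m

frontend tidb_front
    bind *:3390
    default_backend tidb_servers

backend tidb_servers
    balance leastconn                  # least-connections strategy
    server tidb-a 10.0.0.1:4000 check
    server tidb-b 10.0.0.2:4000 check
    server tidb-c 10.0.0.3:4000 check
```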

| username: jansu-dev | Original post link

Suggestions:

  1. Check what types of SQL the lower-CPU instance is actually executing: modify the OPM panel to view SQL types by instance (see the query sketch after this list);
  2. Check whether the CPU frequency of the machine with lower CPU usage is higher than that of the other machines (see the frequency check after this list). If that is the cause, you can adjust the forwarding weights in HAProxy to compensate.
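
For point 1, one way to break statement types down per TiDB instance is to query the cluster statements summary directly. This is a sketch, assuming the statements summary is enabled; the connection parameters are placeholders:

```sh
# Sketch: per-instance breakdown of statement types and execution counts.
# <tidb-host> is a placeholder; connect to any TiDB server.
mysql -h <tidb-host> -P 4000 -u root -e "
  SELECT INSTANCE, STMT_TYPE, SUM(EXEC_COUNT) AS execs
  FROM INFORMATION_SCHEMA.CLUSTER_STATEMENTS_SUMMARY
  GROUP BY INSTANCE, STMT_TYPE
  ORDER BY INSTANCE, execs DESC;"
```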
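
For point 2, comparing clock speeds on each machine is straightforward; a sketch (exact `lscpu` fields vary by distribution):

```sh
# Sketch: compare nominal and live CPU clock speeds across the machines.
lscpu | grep -i 'mhz'
grep 'cpu MHz' /proc/cpuinfo | sort -u
```

If the frequencies do differ, HAProxy's per-server `weight` parameter (for example, `server tidb-c 10.0.0.3:4000 weight 120 check`) is one way to skew the forwarding ratio.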

If neither is the case, you will need to look into it more specifically. I recommend capturing PingCAP Clinic diagnostic data, as blind guessing tends to be inefficient.
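
A minimal sketch of collecting that data with TiUP's `diag` component; the cluster name is a placeholder, and it should be run during or right after a reproduction of the problem:

```sh
# Sketch: gather cluster diagnostics with PingCAP Clinic via TiUP.
tiup diag collect <cluster-name>
```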