Execution Plan

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 执行计划

| username: 胡杨树旁

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.1.2

While checking the dashboard, I found that the Coprocessor execution time accounts for essentially the entire execution time. When I ran the statement manually, it finished in milliseconds. I don’t understand why there is such a large time difference between the two runs.

Also, the selectivity of the condition is quite good: the query returns only a single row, and it uses an index range scan on order_no.
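For reference, one way to compare the slow run with the manual run is to capture the actual per-operator timings with `EXPLAIN ANALYZE`. This is only a sketch; the table name `orders` and the literal value are placeholders, not from the original post:

```sql
-- Sketch only: substitute your real table name and order_no value.
-- EXPLAIN ANALYZE executes the statement and reports actual time
-- per operator, so you can see whether the cop task (TiKV scan)
-- dominates, and whether the plan matches the dashboard's.
EXPLAIN ANALYZE
SELECT *
FROM orders                      -- hypothetical table name
WHERE order_no = 'O-20211101';   -- hypothetical value
```

If the `execution info` column shows most of the time inside the cop task, the bottleneck is the scan on TiKV rather than in tidb-server.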

| username: WalterWj | Original post link

Check if the TiKV nodes were under heavy load at that time, which might have caused the slowness.

| username: 胡杨树旁 | Original post link

Where can I check the load on the TiKV nodes? This table is queried quite frequently.

| username: WalterWj | Original post link

In your screenshot, the coprocessor time is the time TiKV spent scanning data, which may be caused by high load on TiKV. Check whether TiKV’s CPU was saturated during the slow-SQL period to confirm this.

The slow SQL might just be due to the server’s resource usage being too high.

| username: tidb菜鸟一只 | Original post link

In the overview, let’s first check the CPU and memory usage of the TiKV nodes.

| username: 胡杨树旁 | Original post link

Is it this place?

| username: buddyyuan | Original post link

It looks like the load is imbalanced, i.e., there are hotspots.
You said manual execution took only milliseconds. Was the execution plan the same when you ran it manually?
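A quick way to check for read hotspots is the `TIDB_HOT_REGIONS` table in `information_schema`. This is a sketch assuming you have a user with sufficient privileges:

```sql
-- Lists the hot regions currently reported by PD.
-- A single region absorbing most of the read flow on this
-- table would suggest a read hotspot.
SELECT db_name, table_name, type, flow_bytes
FROM information_schema.tidb_hot_regions
ORDER BY flow_bytes DESC
LIMIT 10;
```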

| username: WalterWj | Original post link

You can look at this area. How many CPUs did you allocate to your read pool?
Usage should generally stay below 80% of the allocated capacity.
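If you are unsure what the read pool is set to, you can read the live TiKV configuration from SQL with `SHOW CONFIG` (available since TiDB v4.0); this is a sketch:

```sql
-- Shows the unified read pool sizing on each TiKV instance,
-- e.g. readpool.unified.max-thread-count.
SHOW CONFIG
WHERE type = 'tikv'
  AND name LIKE 'readpool.unified%';
```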

| username: 胡杨树旁 | Original post link

The read pool is set to 20. Is that reasonable for a server with a 64-core CPU running two TiKV instances?

| username: WalterWj | Original post link

Is it configured as 20 cores? Then, according to your read pool monitoring chart, it hasn’t reached the bottleneck.

| username: 胡杨树旁 | Original post link

That’s right, it is set to 20. According to the monitoring, usage is about one third, so there should be no pressure on the TiKV side.

| username: WalterWj | Original post link

Check the network, and see whether the ping latency shown in the monitoring is high.

| username: 会飞的土拨鼠 | Original post link

To check the pressure on the TiKV nodes, look at the Grafana dashboard for the corresponding timestamp. Were there many slow queries during that period?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.