Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: 执行计划疑问 (Question about an execution plan)

From this information, can we tell if the bottleneck is with TiDB or TiKV?
That is operator pushdown, so it runs on TiKV, the component that reads the data and does the computation.
The SQL is consuming a lot of resources; optimizing the SQL will solve the problem.
If you need to add hardware resources, should you add CPU or memory for TiKV?
First, take a look at the SQL. If the SQL itself is bad, nothing else will help.
Cop tasks are statistics from the TiKV side, possibly caused by a large amount of scanned data: either the query itself needs a lot of data, or there are many MVCC versions to scan. You can check the total_keys information in the execution plan. It could also be that the unified read pool is busy, which lengthens the execution time.
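As a sketch of where to look (the table and query here are made up, not from the thread), `EXPLAIN ANALYZE` prints per-operator execution info, and for scan operators the cop-task section includes `total_keys`:

```sql
-- Hypothetical query; substitute your own slow SQL.
EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date >= '2024-01-01';

-- In the output, check the TableFullScan / IndexRangeScan rows.
-- Their execution info contains something like:
--   scan_detail: {total_process_keys: ..., total_keys: ...}
-- If total_keys is far larger than the rows the query actually needs,
-- you are scanning many MVCC versions (or simply too much data).
```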
At month-end, during peak concurrency, there are too many SQL queries, and they are also difficult to modify.
Is the amount of data read from TiKV too large?
Can increasing memory and cache help alleviate this?
I see that the CPU usage of tidb-server is also very high; I'm not sure whether the bottleneck is tidb-server or TiKV.
There are quite a few MVCC versions. While the SQL is executing, check the thread CPU and Unified Read Pool panels under TiKV-Details monitoring.
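One way to sanity-check the MVCC-version theory without opening Grafana (a sketch; the one-hour window is just an example) is the `INFORMATION_SCHEMA.SLOW_QUERY` table, comparing `Total_keys` against `Process_keys`:

```sql
-- If Total_keys is much larger than Process_keys, the scans are
-- reading and discarding many old MVCC versions / tombstones.
SELECT Query, Total_keys, Process_keys
FROM INFORMATION_SCHEMA.SLOW_QUERY
WHERE Time > NOW() - INTERVAL 1 HOUR
ORDER BY Total_keys DESC
LIMIT 10;
```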
Assuming the SQL cannot be optimized, how do we solve the slow queries? You can add resources.
Increase operator parallelism, and if TiKV resources are insufficient, add more TiKV machines.
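On the TiDB side, operator parallelism can be raised per session via system variables. The values below are illustrative only, not tuned recommendations:

```sql
-- Illustrative values; tune against your own workload and CPU headroom.
SET SESSION tidb_distsql_scan_concurrency = 30;  -- cop-task scan parallelism (default 15)
SET SESSION tidb_executor_concurrency = 10;      -- join/aggregation parallelism (default 5)
```

Raising these trades more TiDB/TiKV CPU for lower latency, so it only helps if CPU is not already the bottleneck.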
Yesterday I added a node, but rebalancing the data takes a long time, so it doesn't solve the problem immediately.
Distributed databases are scaled horizontally, and you expand when you hit a bottleneck.
Optimizing SQL? Are you sure you need to scan that much data…