Strange Slow Query

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 奇怪的慢查

| username: TiDB_C罗

[TiDB Usage Environment] Production Environment
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration]
Table Structure

Query Statement

Execution Plan

The index is used, and executing the statement manually is also very fast, so why is it reported as a slow query? The dashboard shows the Coprocessor consuming most of the time. Could you advise which direction to take for the analysis?

| username: TiDBer_zWsmzAL8 | Original post link

It is recommended to check the data distribution of accountid: how many rows match each value and how large they are.
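
A minimal sketch of how to check that, assuming the table is named t and the column is accountid (placeholders for the real names):

    -- How many rows the hottest accountid values have
    SELECT accountid, COUNT(*) AS cnt
    FROM t
    GROUP BY accountid
    ORDER BY cnt DESC
    LIMIT 10;

    -- Statistics TiDB keeps for the column (filter syntax may vary by version)
    SHOW STATS_HISTOGRAMS WHERE table_name = 't' AND column_name = 'accountid';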

| username: TiDB_C罗 | Original post link

         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 41881722
 Avg_row_length: 104
    Data_length: 4355699088
Max_data_length: 0
   Index_length: 670107552
      Data_free: 0
 Auto_increment: 42113927
    Create_time: 2023-07-20 02:50:58
    Update_time: NULL
     Check_time: NULL
      Collation: utf8mb4_unicode_ci
       Checksum:
 Create_options:
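
For reference, output in this vertical format is what a statement like the following produces (the table name is a placeholder):

    SHOW TABLE STATUS LIKE 't'\G
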
| username: Kongdom | Original post link

Is it caused by the order by?

| username: zhanggame1 | Original post link

The dashboard shows it takes only 0.5 seconds, which isn’t too slow, right?

| username: 有猫万事足 | Original post link

Take a look at the content on the page about the coprocessor.

| username: TiDBer_zWsmzAL8 | Original post link

Is the result only one row?

| username: Jasper | Original post link

  1. How long does it take to execute manually?
  2. Please provide the execution result of EXPLAIN ANALYZE for review (see the sketch below).
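
A minimal sketch of the kind of statement being asked for, with a hypothetical table, filter, and ordering column:

    EXPLAIN ANALYZE
    SELECT *
    FROM t
    WHERE accountid = 12345
    ORDER BY create_time DESC
    LIMIT 1;
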
| username: tidb菜鸟一只 | Original post link

High Coprocessor wait time combined with low execution time usually means the TiKV Coprocessor across the whole cluster is under heavy pressure: the Coprocessor threads are busy, so requests queue up and wait.
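
One way to cross-check this from SQL, a sketch assuming the standard INFORMATION_SCHEMA.SLOW_QUERY table is available (column names differ slightly between TiDB versions):

    -- Compare Coprocessor wait vs. process time for recent slow entries
    SELECT time, query_time, cop_time, process_time, wait_time,
           process_keys, total_keys
    FROM INFORMATION_SCHEMA.SLOW_QUERY
    WHERE time > NOW() - INTERVAL 1 HOUR
    ORDER BY query_time DESC
    LIMIT 20;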

| username: TiDB_C罗 | Original post link

The account_id does not return much data.

| username: WalterWj | Original post link

The data volume is not large, but when the block cache is not hit, reading from disk can occasionally cause tail latency.
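
If a block cache miss on the slow runs is the suspicion, a hedged sketch for checking it in the slow query table (the rocksdb_* columns only exist in newer TiDB versions):

    SELECT time, query_time,
           rocksdb_block_cache_hit_count,
           rocksdb_block_read_count,
           rocksdb_block_read_byte
    FROM INFORMATION_SCHEMA.SLOW_QUERY
    WHERE query LIKE '%accountid%'   -- placeholder filter for the statement in question
    ORDER BY query_time DESC
    LIMIT 20;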

| username: zhanggame1 | Original post link

It is possible that the TiKV load is high. Check the overall situation of TiKV CPU and IO.
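
A rough way to see host-level CPU/IO from SQL, assuming the INFORMATION_SCHEMA.CLUSTER_LOAD table is available (the TiKV-Details Grafana dashboards are the more direct route):

    -- Load figures reported by each TiKV instance
    SELECT type, instance, device_type, device_name, name, value
    FROM INFORMATION_SCHEMA.CLUSTER_LOAD
    WHERE type = 'tikv';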

| username: TiDB_C罗 | Original post link

Manual execution is very fast

1 row in set (0.00 sec)

Execution plan
time:524.5µs, loops:2, cop_task: {num: 1, max: 482.9µs, proc_keys: 2, rpc_num: 1, rpc_time: 465.9µs, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{time:0s, loops:1}, scan_detail: {total_process_keys: 2, total_process_keys_size: 110, total_keys: 3, get_snapshot_time: 10.4µs, rocksdb: {key_skipped_count: 2, block: {cache_hit_count: 11}}}
time:934.3µs, loops:2, cop_task: {num: 2, max: 442.1µs, min: 420µs, avg: 431µs, p95: 442.1µs, max_proc_keys: 1, p95_proc_keys: 1, rpc_num: 2, rpc_time: 841.4µs, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:0s, min:0s, avg: 0s, p80:0s, p95:0s, iters:2, tasks:2}, scan_detail: {total_process_keys: 2, total_process_keys_size: 309, total_keys: 2, get_snapshot_time: 17.8µs, rocksdb: {block: {cache_hit_count: 20}}}

| username: TiDB_C罗 | Original post link

Which Grafana metric can verify this point?

| username: TiDB_C罗 | Original post link

The time periods when IO utilization is high do not correspond to the times when the slow queries appeared.

| username: WalterWj | Original post link

TiKV monitoring does have a cache hit-rate metric, but it can't definitively prove that this particular SQL is the one affected by it.

| username: caiyfc | Original post link

Post the execution plan of that slow SQL from the dashboard.

| username: TiDB_C罗 | Original post link

This can match up.

| username: TiDB_C罗 | Original post link

I think the problem is that the tidb-server process is not running. You can check if the process is running with the ps command. If it is not running, you can start it with the systemctl start tidb command.

| username: caiyfc | Original post link

The most important information is on the right side.