Why is the parent operator's time smaller than the child operator's time in EXPLAIN ANALYZE results?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: explain analyze 结果中父算子的time比子算子的time要小,为什么?

| username: HTAP萌新

[TiDB Usage Environment] Test
[TiDB Version]
[Reproduction Path] When running EXPLAIN ANALYZE, I found that the execution time of some child operators (Selection/IndexRangeScan, cop[tikv]) in the plan is longer than the time of the parent operator (IndexReader). Why is this? Most other execution plans satisfy parent operator time >= child operator time.

"query": "SELECT production_year FROM imdb.title USE INDEX(idx1) WHERE production_year>=1912 AND production_year<=1912 AND kind_id>=0 AND kind_id<=6 AND season_nr>=2 AND season_nr<=59",
"plan": [
"id\testRows\testCost\tactRows\ttask\taccess object\texecution info\toperator info\tmemory\tdisk",
"Projection_4\t0.00\t85423.07\t0\troot\t\ttime:115.7µs, loops:1, Concurrency:OFF\timdb.title.production_year, row_size: 8\t1.11 KB\tN/A",
"└─IndexReader_7\t0.00\t85417.07\t0\troot\t\ttime:112.8µs, loops:1, cop_task: {num: 1, max: 7.62ms, proc_keys: 7536, tot_proc: 8ms, rpc_num: 1, rpc_time: 7.59ms, copr_cache: disabled}\tindex:Selection_6, row_size: 48\t232 Bytes\tN/A",
" └─Selection_6\t0.00\t85397.07\t0\tcop[tikv]\t\ttikv_task:{time:8ms, loops:12}, scan_detail: {total_process_keys: 7536, total_process_keys_size: 482304, total_keys: 7537, rocksdb: {delete_skipped_count: 0, key_skipped_count: 7536, block: {cache_hit_count: 12, read_count: 0, read_byte: 0 Bytes}}}\tge(imdb.title.season_nr, 2), le(imdb.title.season_nr, 59), row_size: 48\tN/A\tN/A",
" └─IndexRangeScan_5\t7536.00\t62789.07\t7536\tcop[tikv]\ttable:title, index:idx1(production_year, kind_id, season_nr)\ttikv_task:{time:8ms, loops:12}\trange:[1912 0,1912 6], keep order:false, row_size: 48\tN/A\tN/A"
]


| username: tidb菜鸟一只 | Original post link

Where does it show that the execution time of the child operator is longer than that of the parent operator?

| username: HTAP萌新 | Original post link

The Selection here takes 8ms, while the upper-level IndexReader only takes 112.8µs.

| username: WalterWj | Original post link

TiDB Query Execution Plan Overview | PingCAP Documentation Center. Because some of the reported times are sums over multiple tasks/nodes.
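(For illustration: in the larger plan quoted later in this thread, cop_task shows num: 712 and tot_proc: 31.4s, i.e. roughly 44ms of processing per task summed across all 712 tasks, even though the query's wall-clock time is only about 3.5s.)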

| username: HTAP萌新 | Original post link

However, in my case there is only one cop_task, corresponding to one tikv_task that took 8ms to execute, while the time for IndexReader is significantly lower than that.

| username: 人如其名 | Original post link

I feel this is a flaw in the statistics collection. An operator goes through three phases during execution (the classic volcano model): Open, Next, Close. The execution time for IndexReader is recorded only during Next. However, the request to TiKV for data is issued during the Open phase, and that part is not included in the IndexReader statistics. For large result sets, where most of the execution time is spent in Next, the discrepancy is hardly noticeable; but for fast executions like yours, it is easy for the child operator's time to come out larger than the IndexReader's.
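To make this concrete, here is a minimal, hypothetical Go sketch (not TiDB's actual executor code; the names, channel plumbing, and timings are all illustrative) of a volcano-style operator that dispatches its expensive work during Open while the runtime statistic only wraps Next:

```go
package main

import (
	"fmt"
	"time"
)

// Executor is the classic volcano-model interface: Open prepares
// resources, Next pulls results, Close releases resources.
type Executor interface {
	Open() error
	Next() (row int, ok bool)
	Close() error
}

// indexReader is a stand-in for a root-task reader. In this sketch the
// expensive work (the "cop task") is kicked off asynchronously in Open,
// so it overlaps with whatever the caller does before the first Next.
type indexReader struct {
	results chan int
}

func (e *indexReader) Open() error {
	e.results = make(chan int, 16)
	go func() {
		time.Sleep(8 * time.Millisecond) // pretend TiKV spent 8ms scanning
		e.results <- 1912
		close(e.results)
	}()
	return nil
}

func (e *indexReader) Next() (int, bool) {
	row, ok := <-e.results
	return row, ok
}

func (e *indexReader) Close() error { return nil }

func main() {
	var exec Executor = &indexReader{}
	_ = exec.Open()                   // cop request dispatched; no stopwatch running yet
	time.Sleep(10 * time.Millisecond) // other executors opening, plan warm-up, etc.

	// Only the time spent inside Next is recorded, mirroring the theory
	// above; the 8ms the child spent is never observed by this stopwatch.
	var recorded time.Duration
	for {
		start := time.Now()
		_, ok := exec.Next()
		recorded += time.Since(start)
		if !ok {
			break
		}
	}
	_ = exec.Close()
	fmt.Printf("parent time recorded: %v (child task took 8ms)\n", recorded)
}
```

Running this prints a recorded parent time of a few microseconds even though the child spent 8ms, which mirrors the shape of the anomaly in the plan above.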

Look at my large result set; it seems fine.

mysql> explain analyze select L_SHIPDATE from lineitem where L_RECEIPTDATE < now() and L_LINENUMBER > 1;
+----------------------------+-------------+----------+-----------+---------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+----------+------+
| id                         | estRows     | actRows  | task      | access object                                                       | execution info                                                                                                                                                                                                                                                                                                                                                       | operator info                             | memory   | disk |
+----------------------------+-------------+----------+-----------+---------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+----------+------+
| Projection_4               | 19973507.81 | 20029587 | root      |                                                                     | time:3.6s, loops:20923, Concurrency:5                                                                                                                                                                                                                                                                                                                                | tpch.lineitem.l_shipdate                  | 193.3 KB | N/A  |
| └─IndexReader_7            | 19973507.81 | 20029587 | root      |                                                                     | time:3.47s, loops:20923, cop_task: {num: 712, max: 248.7ms, min: 548.2µs, avg: 62.2ms, p95: 152.3ms, max_proc_keys: 67552, p95_proc_keys: 67552, tot_proc: 31.4s, tot_wait: 6.28s, rpc_num: 712, rpc_time: 44.3s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}                                                                                               | index:Selection_6                         | 4.74 MB  | N/A  |
|   └─Selection_6            | 19973507.81 | 20029587 | cop[tikv] |                                                                     | tikv_task:{proc max:193ms, min:0s, avg: 41.9ms, p80:75ms, p95:116ms, iters:28903, tasks:712}, scan_detail: {total_process_keys: 26706944, total_process_keys_size: 2243383296, total_keys: 26707656, get_snapshot_time: 311.8ms, rocksdb: {key_skipped_count: 26706944, block: {cache_hit_count: 3541, read_count: 24926, read_byte: 147.6 MB, read_time: 156.8ms}}} | gt(tpch.lineitem.l_linenumber, 1)         | N/A      | N/A  |
|     └─IndexRangeScan_5     | 24966884.76 | 26706944 | cop[tikv] | table:lineitem, index:idx2(L_RECEIPTDATE, L_SHIPDATE, L_LINENUMBER) | tikv_task:{proc max:193ms, min:0s, avg: 40.9ms, p80:72ms, p95:115ms, iters:28903, tasks:712}                                                                                                                                                                                                                                                                         | range:[-inf,2023-05-28], keep order:false | N/A      | N/A  |
+----------------------------+-------------+----------+-----------+---------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+----------+------+
4 rows in set (3.61 sec)

| username: HTAP萌新 | Original post link

Thank you. Indeed, some plans look fine while others show a significant discrepancy.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.