Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: Units in the cluster_slow_query table
I would like to ask whether the unit of this query_time is seconds (s) or milliseconds (ms).
All time-related fields in the slow query log are in “seconds”.
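For a quick sanity check, a minimal query like the following lists the slowest recent statements; the query_time values it returns are in seconds:

```sql
-- Minimal sketch: list the ten slowest recorded statements.
-- query_time (like the other *_time fields) is reported in seconds.
SELECT time, query_time, query
FROM INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY
ORDER BY query_time DESC
LIMIT 10;
```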
The default value of tidb_gc_life_time is 10m, which means that data older than 10 minutes will be cleaned up. You can adjust this parameter according to your needs.
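For example, assuming TiDB v5.0 or later (where tidb_gc_life_time is a global system variable), you can inspect and adjust it like this; the 24h value is only an illustration:

```sql
-- Check the current GC retention window (default: 10m0s).
SHOW VARIABLES LIKE 'tidb_gc_life_time';

-- Extend the window; '24h' is an illustrative value, not a recommendation.
SET GLOBAL tidb_gc_life_time = '24h';
```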
There is another question: is the query_time that I retrieved the execution time of the SQL? I reran the same SQL in the database with EXPLAIN ANALYZE and found that its runtime was inconsistent with the query_time retrieved from the INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY table.
query_time is the time consumed by the SQL execution. The historical SQL may have used different WHERE condition values than your current statement, so the times may not match. Execute the latest statement, then look up that same run in CLUSTER_SLOW_QUERY and compare the times.
The values in the CLUSTER_SLOW_QUERY table are actually read from the slow log files at the operating system level, so query_time is the execution time of the slow SQL. A different time on rerun may be because the execution plan differs from before or because the system load has changed.
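To compare like with like, you can pull the recorded time and plan for the specific statement. A minimal sketch, where the LIKE pattern is a placeholder for your own SQL text:

```sql
-- Sketch: fetch the most recent recorded run of one statement.
-- The LIKE pattern below is a placeholder; substitute your SQL text.
SELECT time, query_time, plan
FROM INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY
WHERE query LIKE 'SELECT%FROM my_table%'
ORDER BY time DESC
LIMIT 1;
```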
My scenario: I queried CLUSTER_SLOW_QUERY for SQL statements with query_time > 1.5 s and picked one of them (the kind of filter I used is sketched below). When I ran it in the database with EXPLAIN ANALYZE, it took around 1 s at that time, but now it executes in 300 ms with the same execution plan. Is it possible that this SQL was executed more frequently back then, putting more pressure on the system? In theory, the query_time in CLUSTER_SLOW_QUERY should be close to the time reported by EXPLAIN ANALYZE, right? Can it be understood this way?
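A minimal sketch of that filter (column names per the CLUSTER_SLOW_QUERY schema; adjust the threshold as needed):

```sql
-- Sketch: statements slower than 1.5 s, slowest first.
SELECT time, instance, query_time, query
FROM INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY
WHERE query_time > 1.5
ORDER BY query_time DESC;
```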
Yes. For example, during peak business periods the database load is high, or the result set for the condition values used at that time was relatively large. If you rerun it during a peak period, the times should be close.
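One way to check the peak-load hypothesis is to count how many slow statements were logged around the same timestamp; a rough sketch, with placeholder times:

```sql
-- Sketch: number of slow statements logged in a 5-minute window.
-- The timestamps are placeholders; use the window around your recorded run.
SELECT COUNT(*) AS slow_count
FROM INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY
WHERE time BETWEEN '2024-01-01 10:00:00' AND '2024-01-01 10:05:00';
```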