Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: A single query reading too much table data (over 1 GB) triggered OOM
I don’t want to modify the mem-quota-query parameter. Can I solve this by using Cursor Fetch and setting the FetchSize?
That probably won't work. If your data is scattered across multiple TiKV nodes, it has to be gathered back on the TiDB node before the limit can be applied, because conditions such as sorting or grouping only take effect once all the data is together…
Effective method: the OOM is not caused by the frontend blocking; it happens because TiDB caches data from TiKV and runs operator calculations with high concurrency, which makes it buffer a large amount of data. Try lowering tidb_distsql_scan_concurrency to reduce the OOM risk; you can set it to 2 and see if it helps (see the sketch below).
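For example, a minimal sketch for just the current session, using the trial value of 2 suggested above:

```sql
-- Lower the scan concurrency so fewer cop_task results are buffered on
-- the TiDB server at the same time.
SET tidb_distsql_scan_concurrency = 2;
```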
Cursor Fetch already happens at the TiDB server layer, and mem-quota-query is enforced at that same layer, so fetching in batches on the client side won't get you under the quota.
Why don’t you want to modify the mem-quota-query parameter? Either optimize the query or increase the threshold.
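For example, if the concern is editing the cluster config file, the same threshold can usually be raised per session through the tidb_mem_quota_query variable (the session-level counterpart of mem-quota-query in most versions); the 2 GB value below is only an illustration:

```sql
-- Raise the per-query memory quota for the current session only,
-- e.g. for the one-off synchronization job; the value is in bytes.
SET tidb_mem_quota_query = 2147483648;
```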
Isn’t the Cursor Fetch method used to fetch data in batches? My current query is very simple, it’s just data synchronization with DataX, but there’s too much data, and DataX can’t manually control the batching.
I think that even when you use Cursor Fetch for streaming reads, the result set is already sitting entirely on the TiDB server; it doesn't rewrite your SQL to get a limit/offset effect.
So the solution is as kongdom said above: either optimize the SQL or raise the mem quota threshold.
No, I have explained the reason very clearly and provided a solution. You can refer to it.
That statement is incorrect. What the server caches is a result set made up of chunks; after the client reads one chunk, it fetches the next. Each cached chunk holds at most max_chunk_size (1024) rows.
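For reference, the chunk size mentioned here corresponds to the tidb_max_chunk_size system variable (default 1024 rows); this just shows how to check it:

```sql
-- Inspect the maximum number of rows per chunk for the current session.
SHOW VARIABLES LIKE 'tidb_max_chunk_size';
```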
Increasing the memory quota would solve the problem; a single query using 1 GB is quite common.
The TiDB server indeed retrieves data from TiKV in a streaming manner, chunk by chunk. However, there is a question: for statements like “select xxx from xx limit n;”, the data retrieved from TiKV should be sorted in the memory of the TiDB server (assuming a full table scan without using an index in the execution plan). In this case, the TiDB server is likely to encounter an out-of-memory (OOM) issue, right?
This does not involve sorting. If limit n is used and n is relatively small, fewer cop_tasks are created, likely fewer than tidb_distsql_scan_concurrency, so OOM is unlikely. At most tidb_distsql_scan_concurrency = 15 (the default) tasks run at once, so peak memory usage is roughly 15 * region_size. Even that situation is rare, because a single region holds a great many rows; you would need a limit of hundreds of thousands or millions of rows to touch 15 regions. So with a plain limit n, OOM is almost never a concern.
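If you want to verify this, EXPLAIN shows where the Limit executes; xx is the table name from the example above and 100 is just an illustrative value of n:

```sql
-- Plain LIMIT without ORDER BY: in a typical plan the Limit operator
-- appears on the cop[tikv] side under the TableReader, so each cop_task
-- stops early and TiDB only buffers a small number of rows.
EXPLAIN SELECT * FROM xx LIMIT 100;
```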
You might be referring to the case of select xxx from xx order by cola limit n, where cola is not an indexed column, right? In that case the number of cop_tasks is basically the number of regions in the table. However, the TopN operator is pushed down to TiKV, so each cop_task (roughly one region) returns only its own top N rows, concurrently, to the TiDB server. The TiDB server therefore caches at most limit N * tidb_distsql_scan_concurrency rows, making the probability of OOM very small.
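As a sanity check (xx and cola are the names used above, 100 is illustrative), the plan should normally show the TopN pushdown:

```sql
-- ORDER BY on a non-indexed column plus LIMIT: expect a TopN on the
-- cop[tikv] side (each region returns only its own top N rows) and a
-- second TopN on the root (TiDB) side that merges the per-region results.
EXPLAIN SELECT * FROM xx ORDER BY cola LIMIT 100;
```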
Further analysis: suppose you need select cola, count(*) from xx group by cola limit n; and there is no index on cola. Will OOM occur in this case? For aggregation, if the data has low cardinality (lots of repetition), hash aggregation is generally done first at the TiKV layer and the partial results are re-aggregated at the TiDB layer, so the probability of OOM is low. If the data has high cardinality, a large amount of data lands in the TiDB layer for hash aggregation, and with the default configuration the probability of OOM is higher.
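A quick way to see which case applies (names as above):

```sql
-- GROUP BY on a non-indexed column: with the pushed-down aggregation you
-- normally see a HashAgg on the cop[tikv] side plus a second HashAgg on
-- the root side; if aggregation happens only at the root, you are in the
-- high-cardinality, OOM-prone case described above.
EXPLAIN SELECT cola, COUNT(*) FROM xx GROUP BY cola LIMIT 100;
```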
How do you keep large data volumes under control at the TiDB server layer and avoid OOM? First, enable the disk-spill parameters so that some operators can spill to disk. However, hash aggregation cannot spill to disk while it runs concurrently; you need to set both tidb_hashagg_partial_concurrency and tidb_hashagg_final_concurrency to 1 for spilling to take effect. Without concurrency and with disk writes, execution is relatively slow, but it avoids OOM (theoretically, at least; TiDB's memory tracking and estimation still need better accuracy to avoid OOM completely).
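A sketch of those settings for the current session (disk spilling also requires oom-use-tmp-storage to be enabled in the TiDB config file, which is the default in recent versions, but check your cluster):

```sql
-- Disable hash aggregation concurrency so that the HashAgg operator is
-- allowed to spill to disk once it exceeds the memory quota.
SET tidb_hashagg_partial_concurrency = 1;
SET tidb_hashagg_final_concurrency = 1;
```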
Between TiDB and TiKV, a task request follows the producer-consumer model: TiDB is the consumer and TiKV is the producer. When TiDB consumes slowly (i.e., the client retrieves data slowly), data backs up in TiDB's buffer and TiDB can easily OOM. Conversely, when TiDB consumes quickly but the gRPC network is slow, data can in theory back up on the TiKV side and TiKV can OOM. For a single-table full-table query, as long as the TiDB client retrieves data quickly, it generally isn't a big problem. There are two other points to note:
The tidb_enable_rate_limit_action parameter: it controls the flow-control action taken when the data buffered by the concurrent cop_task request queue grows so large that memory usage exceeds the current session's memory limit. Flow control is triggered first: after each cop_task finishes, a check is made, and if memory usage exceeds tidb_mem_quota_query, the concurrency is reduced by one. The potential issue is that flow control takes priority over cancelling the query, so the query is not killed until flow control has failed, even when memory usage is already far over the limit; with many concurrent sessions this can OOM the whole TiDB server. It is therefore recommended to set tidb_enable_rate_limit_action to OFF so that the cancel action kicks in more promptly once TiDB has fetched too much data to keep caching.
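A sketch of the recommendation above (setting it globally affects new sessions; use a session-level SET if you only want to change the current one):

```sql
-- Cancel the over-quota query promptly instead of throttling cop_task
-- concurrency first.
SET GLOBAL tidb_enable_rate_limit_action = OFF;
```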