The rewriteBatchedStatements Parameter Causes Zero Hits in [Queries Using Plan Cache OPS]

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: rewriteBatchedStatements参数导致无Queries Using Plan Cache OPS 命中

| username: TiDBer_rrt4FOef

[TiDB Usage Environment] Test
[TiDB Version]
V6.1.2
[Encountered Problem]
The application scenario batch-deletes data and then batch-writes data using JDBC batch processing. During testing on TiDB, I found that without rewriteBatchedStatements=true the write speed is very slow, but in the monitoring the [Queries Using Plan Cache OPS] panel shows cache hits, and the [CPS By Type] panel shows StmtExecute with almost no Query.

With rewriteBatchedStatements=true, the write speed improves significantly, but the [Queries Using Plan Cache OPS] panel stays at 0 with no cache hits, and the [CPS By Type] panel shows only Query.

Is it because the rewritten INSERT ... VALUES statement cannot be cached?
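For reference, the write path is roughly the sketch below: a minimal JDBC batch insert against a hypothetical table t(a, b), with placeholder URL and credentials. With rewriteBatchedStatements=true, Connector/J rewrites the batch into a single multi-value INSERT, so the statement text the server sees depends on the batch size.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical table: CREATE TABLE t (a INT, b VARCHAR(64));
        String url = "jdbc:mysql://127.0.0.1:4000/test"
                + "?useServerPrepStmts=true&cachePrepStmts=true"
                + "&rewriteBatchedStatements=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t (a, b) VALUES (?, ?)")) {
            for (int i = 0; i < 100; i++) {          // batchSize = 100
                ps.setInt(1, i);
                ps.setString(2, "row-" + i);
                ps.addBatch();
            }
            // With rewriteBatchedStatements=true the driver sends something like
            //   INSERT INTO t (a, b) VALUES (?,?),(?,?),...  -- one group per row
            // so each distinct batch size produces a distinct statement text.
            ps.executeBatch();
        }
    }
}
```

One point worth noting: in v6.1 the plan cache serves prepared statements (the StmtExecute command in [CPS By Type]) only. If the rewritten batches arrive at the server as plain-text Query commands, which is what the monitoring above suggests, they are not eligible for the plan cache regardless of their length.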

[Phenomenon Update]
I suspect that the batch size is too large, making the rewritten SQL exceed prepStmtCacheSqlLimit and so never hit the cache. After reducing the batch size to 10 statements per batch and rerunning, [Queries Using Plan Cache OPS] shows hits. But with the batch size raised to 100, [Queries Using Plan Cache OPS] shows no hits no matter how large prepStmtCacheSqlLimit is set.
Is there a limit on the length of batch statements on the TiDB server side?
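To put rough numbers on the suspicion, here is a back-of-the-envelope estimate of the rewritten statement's length. This is my own approximation of the placeholder text, not Connector/J's exact output:

```java
public class RewrittenSqlLength {
    // Approximate the multi-value placeholder text the driver would produce
    // for a batch of `rows` rows against a table with `columns` columns.
    static int rewrittenLength(String table, int columns, int rows) {
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + " VALUES ");
        for (int r = 0; r < rows; r++) {
            if (r > 0) sb.append(',');
            sb.append('(');
            for (int c = 0; c < columns; c++) {
                if (c > 0) sb.append(',');
                sb.append('?');
            }
            sb.append(')');
        }
        return sb.length();
    }

    public static void main(String[] args) {
        // The two tables from the test below have 56 and 146 columns.
        for (int cols : new int[]{56, 146}) {
            for (int batch : new int[]{10, 100}) {
                System.out.printf("cols=%d batch=%d -> ~%d chars%n",
                        cols, batch, rewrittenLength("t", cols, batch));
            }
        }
    }
}
```

For 146 columns at batchSize=100 this comes to only about 30 KB, far below prepStmtCacheSqlLimit=2147483647, which hints that the misses come from the server side rather than from the client refusing to cache the statement.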

[Test 2]
The specific JDBC connection string is as follows:
jdbc:mysql://11.139.35.194:4000/busiData?useServerPrepStmts=true&cachePrepStmts=true&prepStmtCacheSize=1000&prepStmtCacheSqlLimit=2147483647&rewriteBatchedStatements=true&useConfigs=maxPerformance&allowMultiQueries=true
prepStmtCacheSqlLimit is already set to Integer.MAX_VALUE; I also tried the even larger value 20480000000000000000000, but it still has no effect.
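One caveat, based on my reading of Connector/J and worth verifying: prepStmtCacheSqlLimit is an integer connection property, so 2147483647 is already its ceiling, and a value like 20480000000000000000000 cannot be parsed as an int at all. Setting the options through a Properties object rather than a long URL makes such mistakes easier to spot; a sketch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ConnectWithProps {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "");
        props.setProperty("useServerPrepStmts", "true");
        props.setProperty("cachePrepStmts", "true");
        props.setProperty("prepStmtCacheSize", "1000");
        // Integer property: Integer.MAX_VALUE is the effective ceiling.
        props.setProperty("prepStmtCacheSqlLimit",
                String.valueOf(Integer.MAX_VALUE));
        props.setProperty("rewriteBatchedStatements", "true");
        props.setProperty("allowMultiQueries", "true");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:4000/test", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```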

Database Related Parameters
tidb_prepared_plan_cache_size: 100
tidb_ignore_prepared_cache_close_stmt: ON
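
Given these settings, whether a specific statement actually hit the plan cache can also be checked per-execution with TiDB's last_plan_from_cache session variable, instead of reading the aggregate OPS panel. A minimal sketch, assuming a hypothetical table t with column a:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class PlanCacheCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:4000/test"
                + "?useServerPrepStmts=true&cachePrepStmts=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             PreparedStatement ps =
                     conn.prepareStatement("SELECT * FROM t WHERE a = ?")) {
            for (int i = 0; i < 2; i++) {   // second run should hit the cache
                ps.setInt(1, i);
                ps.executeQuery().close();
                // last_plan_from_cache reflects the previous statement.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT @@last_plan_from_cache")) {
                    rs.next();
                    System.out.println("run " + i
                            + " plan from cache: " + rs.getInt(1));
                }
            }
        }
    }
}
```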

Ran two tests, with batchSize=10 and batchSize=100.
This scenario batch-inserts data into two tables, one with 56 columns and the other with 146 columns. The rewritten batch insert statements are presumably very large, but it is unclear where to see them (see the sketch below).
However, the client received no max_allowed_packet-related errors, and all data was inserted successfully.
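On where to see the rewritten statements, two options I am aware of, both sketched below: on the client side, Connector/J's profileSQL=true logs the statements the driver actually sends; on the server side, TiDB's tidb_general_log writes every received statement into the tidb-server log (the exact SET syntax/scope may vary by version).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ShowRewrittenSql {
    public static void main(String[] args) throws Exception {
        // Client side: profileSQL=true makes Connector/J log the statements it
        // actually sends, including any rewritten multi-value INSERT.
        String url = "jdbc:mysql://127.0.0.1:4000/test"
                + "?useServerPrepStmts=true&cachePrepStmts=true"
                + "&rewriteBatchedStatements=true&profileSQL=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement st = conn.createStatement()) {
            // Server-side alternative: log every statement TiDB receives in
            // the tidb-server log (very noisy; test environments only).
            st.execute("SET tidb_general_log = 1");
        }
    }
}
```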
When batchSize=10, monitoring chart:


When batchSize=100, monitoring chart:

Could not find the Plan Cache Memory Usage monitoring chart in TiDB → Executor monitoring.
When batchSize=100, Memory Usage monitoring shows memory rising quickly.

What could be the issue? How should I troubleshoot it? Thanks for the help!

| username: gcworkerishungry | Original post link

Looking at the parameters, there is a limit on SQL length, but statements several MB long are supported; I have seen SQL of more than 10 MB, and the cap may be 64 MB. So check the length of your insert statements. Other factors can also affect cache hits; you can refer to this:

SQL length limit reference: "Is there a limit on the maximum SQL statement length in TiDB?" (请问tidb 最大sql语句长度有没有限制?) - #17, by 疾风之狼 - TiDB Q&A community
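
As a quick check of the effective cap, the server's max_allowed_packet can be read over the same connection (in TiDB it defaults to 64 MiB); a small sketch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxPacketCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://127.0.0.1:4000/test", "root", "");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SHOW VARIABLES LIKE 'max_allowed_packet'")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}
```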

| username: tidb菜鸟一只 | Original post link

Due to a client bug, when performing batch updates, if you need to configure rewriteBatchedStatements = true and useServerPrepStmts = true, it is recommended to also configure the allowMultiQueries = true parameter to avoid this bug.

It is suggested to add this parameter and give it a try.

| username: TiDBer_rrt4FOef | Original post link

Thank you for the reply. The allowMultiQueries parameter has already been set to true. The specific JDBC connection string is as follows:
jdbc:mysql://11.139.35.194:4000/busiData?useServerPrepStmts=true&cachePrepStmts=true&prepStmtCacheSize=1000&prepStmtCacheSqlLimit=2147483647&rewriteBatchedStatements=true&useConfigs=maxPerformance&allowMultiQueries=true

| username: TiDBer_rrt4FOef | Original post link

Thank you for your help. Image attachments kept failing to upload when replying, so I placed some of the monitoring screenshots from the tests in the main body of the post.

I suspect that the rewritten batch insert statements are very large, but I don't know where to view the statements that JDBC rewrites.
How can I determine if it has exceeded the memory limit of the plan cache?
The version is 6.1.2, and I couldn't find "Plan Cache Memory Usage" in the monitoring; presumably it is only available in later versions.

| username: gcworkerishungry | Original post link

Can you find the statement in the SQL Statements (statement analysis) or Top SQL pages of the Dashboard?

| username: TiDBer_rrt4FOef | Original post link

I found the original insert statement in the [SQL statement analysis], as shown in the picture.

| username: Min_Chen | Original post link

Hello,
batchSize=10 can be cached, but batchSize=100 cannot.
It is most likely that the SQL is too large to fit into the cache's memory: the plan cache reserves very little memory for each statement, and a statement that does not fit will not use the plan cache.

| username: TiDBer_rrt4FOef | Original post link

Thank you for the response.
Are there any related parameters that can set this value?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.