Issue with max-retry-count Parameter Control

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: max-retry-count参数控制问题

| username: 逍遥_猫

The official documentation says "maximum retry count for a single statement." Is this retry count tracked per transaction (by txnStartTS), or is it counted per statement based on the SQL digest of the statement that hit the error?

| username: tidb菜鸟一只 | Original post link

It must be counted for a specific SQL statement, not for a whole class of statements sharing a digest…
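For context, `max-retry-count` lives under the `[pessimistic-txn]` section of the TiDB configuration file and caps how many times a single statement retries after a pessimistic-lock failure. A sketch of the relevant fragment of `tidb.toml` (the value shown is the documented default; verify against your TiDB version):

```toml
[pessimistic-txn]
# Maximum number of retries for a single statement when it encounters
# a lock failure in a pessimistic transaction. Once exceeded, the
# statement returns an error to the client instead of retrying again.
max-retry-count = 256
```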

| username: pursue | Original post link

These documents may be helpful:

TiDB Pessimistic Transaction Mode | PingCAP Documentation Center

TiDB Lock Conflict Handling | PingCAP Documentation Center

| username: 逍遥_猫 | Original post link

After the error, the connection is terminated, right?

| username: Jellybean | Original post link

After the error, that SQL statement stops executing. As for the connection: the client typically accesses the database over a long-lived MySQL-protocol TCP connection, so unless the client actively disconnects or a system timeout fires, the connection remains open.

| username: 逍遥_猫 | Original post link

Does "system timeout" refer to a timeout set by TiDB?

| username: 逍遥_猫 | Original post link

There is an update statement that sets different values for a certain field and keeps retrying; the resulting write conflicts generate a large number of log entries, and the connection IDs differ across the retries. Is this something that can only be handled on the application side? I haven't found a way to control it from the DB side.

| username: Jellybean | Original post link

"System timeout" refers to databases like TiDB/MySQL disconnecting, from the server side, connections that have been idle for an extended period, to avoid wasting resources. In MySQL and TiDB this is governed by the `wait_timeout` variable, which defaults to 28800 seconds (8 hours), and users can adjust it to other values.
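As a quick how-to (assuming the MySQL-compatible `wait_timeout` variable, which TiDB also supports), the timeout can be inspected and changed from a client session:

```sql
-- Check the current idle-connection timeout (in seconds).
SHOW VARIABLES LIKE 'wait_timeout';

-- Raise or lower it globally; applies to connections opened afterwards.
SET GLOBAL wait_timeout = 28800;
```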

| username: Jellybean | Original post link

Different transactions updating the same data record and encountering a write conflict indicates a transaction conflict. The database can do little to address this issue; adjusting various parameters can at most alleviate the symptoms. To fundamentally solve the problem, it needs to be handled from the application side by avoiding simultaneous operations on the same data as much as possible.
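The application-side handling described above usually means catching the write-conflict error and retrying with backoff, so that competing writers stop hammering the same row at the same moment. Below is a minimal, driver-agnostic sketch in Python; `WriteConflictError` and `execute_update` are hypothetical stand-ins for whatever error type and update call your actual client library uses:

```python
import random
import time


class WriteConflictError(Exception):
    """Stand-in for the error a real driver raises on a TiDB write conflict."""


def update_with_retry(execute_update, max_attempts=5, base_delay=0.05):
    """Run `execute_update`, retrying on write conflict with backoff + jitter.

    `execute_update` is a hypothetical callable that performs the UPDATE
    and raises WriteConflictError when the database reports a conflict.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return execute_update()
        except WriteConflictError:
            if attempt == max_attempts:
                raise  # give up and surface the conflict to the caller
            # Exponential backoff with jitter spreads competing writers apart.
            delay = base_delay * (2 ** (attempt - 1)) * (0.5 + random.random())
            time.sleep(delay)


# Usage sketch: a fake update that conflicts twice, then succeeds.
attempts = {"n": 0}

def fake_update():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise WriteConflictError()
    return "ok"

result = update_with_retry(fake_update, base_delay=0.001)
```

The key design choice is the jitter: without it, all conflicting writers wake up on the same schedule and collide again, which is exactly the retry storm described in this thread.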

| username: dba远航 | Original post link

This is to avoid resource wastage caused by excessive retries.

| username: tidb菜鸟一只 | Original post link

Yes, such lock conflicts caused by transactions can only be resolved at the business level. The database can only ensure data consistency, not business logic.