TiDB Auto-Increment Primary Key with AUTO_ID_CACHE=1 Duplicates Values Already in the Table

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb 自增主键 设置 AUTO_ID_CACHE=1 和实际的表 重复

| username: Hacker_ZcrkjsVg

[TiDB Usage Environment] Production Environment
[TiDB Version] 6.5.4
Business table:

CREATE TABLE `reportstatelog_info` (
`log_id` int(11) NOT NULL AUTO_INCREMENT,
`machixxnereport_id` int(11) DEFAULT NULL,
`shopxx_id` int(11) DEFAULT NULL,
`saleplacxxeno` varchar(20) COLLATE utf8_general_ci DEFAULT NULL,
`report_state` int(11) DEFAULT NULL,
`create_time` datetime DEFAULT NULL,
PRIMARY KEY (`log_id`)
/*T![clustered_index] CLUSTERED */
) ENGINE = InnoDB DEFAULT CHARSET = utf8 COLLATE = utf8_general_ci
/*T![auto_id_cache] AUTO_ID_CACHE=1 */

There was no problem before, but today a primary key conflict occurred.

select max(log_id) from reportstatelog_info
40826765

However, during inserts the auto-increment allocator handed out an ID of 40823765, which is lower than the existing maximum, so the inserts hit duplicate primary key errors. During the problem period the TiDB nodes did not crash, the issue cannot be reproduced, and the root cause is unknown.
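
For illustration, the failure mode looks roughly like this; the insert statement and the exact error text below are assumptions based on the schema above, not captured from the production cluster.

```sql
-- Hypothetical reproduction of the symptom: the allocator hands out an ID that is
-- lower than MAX(log_id), so the auto-assigned value collides with an existing row.
INSERT INTO reportstatelog_info (machixxnereport_id, shopxx_id, report_state, create_time)
VALUES (1, 1, 0, NOW());
-- ERROR 1062 (23000): Duplicate entry '40823765' for key 'PRIMARY'  (approximate wording)
```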

| username: 随缘天空 | Original post link

The log_id has already exceeded the maximum value, so new rows can no longer be written, which results in the primary key conflict.

| username: xfworld | Original post link

Check if this is the cause.

| username: TIDB-Learner | Original post link

Restart the TiDB server during non-business hours.

| username: zhanggame1 | Original post link

Gaps in allocation make the IDs jump forward, but they should never repeat. I've run into this issue before, where the counter jumped backward by a lot.

| username: zhanggame1 | Original post link

I don't know why, but I've run into this twice, and the fix is simple.

Method 1: Manually insert a row with an explicit ID greater than the current maximum in the table; after that, inserts work fine again.

Method 2: Use ALTER TABLE tablename FORCE AUTO_INCREMENT=1234; with a value above the current maximum (both methods are sketched below).
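
A minimal sketch of both workarounds against the table from this thread; the value 40826766 is illustrative (any value above the current MAX(log_id) works) and is not taken from the original cluster.

```sql
-- Method 1: insert one row with an explicit log_id above the current maximum;
-- TiDB rebases the auto-increment allocator past that value.
INSERT INTO reportstatelog_info (log_id, report_state, create_time)
VALUES (40826766, 0, NOW());

-- Method 2: force-rebase the counter directly, then resume normal inserts.
ALTER TABLE reportstatelog_info FORCE AUTO_INCREMENT = 40826766;
```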

| username: DBAER | Original post link

Bookmarking this thread to study later.

| username: zhang_2023 | Original post link

It can guarantee that the IDs keep increasing.

| username: Hacker_ZcrkjsVg | Original post link

Thank you. I have the same issue as you. Our version is 6.5 and yours is 7.5, which suggests that simply upgrading would not fix it. Setting auto_id_cache=1 is supposed to solve the skipped-number problem, but now the problem is even more severe: the primary key conflicts prevent data from being written at all.

| username: zhanggame1 | Original post link

The issue is indeed very serious; I will check with the vendor.

| username: zhanggame1 | Original post link

I have checked: this is the bug "autoid client reset connection operation is not concurrency-safe" (Issue #50519 · pingcap/tidb · GitHub), and it has been fixed in versions 6.5.8 and 7.5.1. You need to upgrade your version.
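
Before planning the upgrade, the running version can be confirmed from any SQL client; the output shown here is abbreviated.

```sql
SELECT tidb_version();
-- Release Version: v6.5.4
-- (followed by edition, git hash, build time, and store type)
```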

| username: 友利奈绪 | Original post link

Mine are also not contiguous; the IDs do jump, but they don't repeat.

| username: xfworld | Original post link

Your situation is normal.

| username: RenlySir | Original post link

Fixed the issue where auto-increment ID allocation reported errors due to concurrent conflicts when using AUTO_ID_CACHE=1 for auto-increment columns (#50519, @tiancaiamao).

This issue was fixed in version 6.5.8

| username: zhanggame1 | Original post link

Setting AUTO_ID_CACHE=1 when creating a table will prevent ID jumps under normal circumstances, but it significantly impacts performance in older versions.
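
To confirm whether an existing table already carries this option, check its DDL; for the table in this thread the output contains the TiDB-specific comment hint shown in the original post.

```sql
SHOW CREATE TABLE reportstatelog_info;
-- The Create Table output should contain:
--   /*T![auto_id_cache] AUTO_ID_CACHE=1 */
```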

| username: 健康的腰间盘 | Original post link

Change the range of ID allocation?

| username: 呢莫不爱吃鱼 | Original post link

Worth learning

| username: aytrack | Original post link

Has the cluster ever gone through a backup and restore?
Execute `SHOW TABLE reportstatelog_info NEXT_ROW_ID;` and check the NEXT_GLOBAL_ROW_ID value in the result.
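
A sketch of that check; the database name and the numbers in the output are illustrative, not taken from the affected cluster.

```sql
SHOW TABLE reportstatelog_info NEXT_ROW_ID;
-- +---------+---------------------+-------------+--------------------+----------------+
-- | DB_NAME | TABLE_NAME          | COLUMN_NAME | NEXT_GLOBAL_ROW_ID | ID_TYPE        |
-- +---------+---------------------+-------------+--------------------+----------------+
-- | test    | reportstatelog_info | log_id      | 40826766           | AUTO_INCREMENT |
-- +---------+---------------------+-------------+--------------------+----------------+
-- If NEXT_GLOBAL_ROW_ID is at or below MAX(log_id), the allocator has fallen behind
-- and newly assigned IDs will collide with existing rows.
```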

| username: Billmay表妹 | Original post link

Hello, could you check whether the issue still exists after upgrading to 6.5.8? We reported it to the R&D team, but they have not been able to reproduce your issue so far.