Errors Appear Frequently in TiKV Logs When Dropping a Large Number of Partitions from Partitioned Tables

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 开发大量drop分区表的分区,tikv日志出现大量报错

| username: zhanggame1

[TiDB Usage Environment] Test
[TiDB Version] 7.1
[Reproduction Path] Operations performed that led to the issue
[Encountered Issue: Problem Phenomenon and Impact]
During development, a large number of partitions were dropped from partitioned tables, which produced a large number of errors in the TiKV logs.
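For reference, a minimal sketch of the kind of operation described, assuming a hypothetical range-partitioned table named `sales` with partitions named by month (the original post does not include the actual statements):

```sql
-- Hypothetical table and partition names, for illustration only.
ALTER TABLE sales DROP PARTITION p202301;

-- Several partitions can also be dropped in a single statement,
-- which is presumably what "a large number of drop partition
-- operations" refers to here.
ALTER TABLE sales DROP PARTITION p202302, p202303, p202304;
```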

| username: 像风一样的男子 | Original post link

The log is suggesting that you run ANALYZE TABLE.
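A minimal sketch of that suggestion, again assuming the hypothetical `sales` table; TiDB lets you analyze the whole table or only specific partitions:

```sql
-- Rebuild statistics for the whole (hypothetical) table
ANALYZE TABLE sales;

-- Or rebuild statistics only for the partitions that changed
ANALYZE TABLE sales PARTITION p202305, p202306;
```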

| username: zhanggame1 | Original post link

That suggestion is about TiDB; what I mainly want to ask is what the TiKV error means.

| username: 有猫万事足 | Original post link

Take a look at this.

| username: tidb菜鸟一只 | Original post link

Try manually compacting TiKV.

| username: 像风一样的男子 | Original post link

The split process first checks whether a split is needed, calculates the split key, then requests a new region ID from PD and constructs the new region information based on that ID. Check whether the region status is normal. If both the region leader and the replicas are healthy, check whether the region's mvcc.num_rows is high. If num_rows is also high, check whether the cluster's GC is configured with a long life time.
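A sketch of how those checks might be done from the SQL side, assuming the hypothetical `sales` table in a database named `test`; region-level MVCC properties such as mvcc.num_rows are not directly exposed in SQL, but approximate key counts per region and the GC settings are:

```sql
-- Approximate size and key counts per region of the (hypothetical) table
SHOW TABLE sales REGIONS;

-- Region status as recorded by PD
SELECT REGION_ID, APPROXIMATE_SIZE, APPROXIMATE_KEYS
FROM information_schema.TIKV_REGION_STATUS
WHERE DB_NAME = 'test' AND TABLE_NAME = 'sales';

-- Current GC life time and related GC state
SHOW VARIABLES LIKE 'tidb_gc_life_time';
SELECT * FROM mysql.tidb WHERE VARIABLE_NAME LIKE 'tikv_gc%';
```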

| username: zhanggame1 | Original post link

The GC is configured for 48 hours, which might be too long.
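If the 48-hour setting turns out to be the cause, a hedged sketch of shortening it; the right value depends on backup and flashback requirements, and '10m0s' below is simply the TiDB default, used as an illustration:

```sql
-- Check the current value (48 hours in this case)
SHOW VARIABLES LIKE 'tidb_gc_life_time';

-- Shorten the GC life time; '10m0s' is the default, shown only as an example
SET GLOBAL tidb_gc_life_time = '10m0s';
```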

| username: knull | Original post link

Marking this to follow up on how things stand now. Has the issue been resolved?

| username: zhanggame1 | Original post link

Although there were errors, they had no actual impact, so in the end they were not addressed.