Is there a good way to handle a filtered delete of 30,000 records that runs very slowly and cannot acquire a lock?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 3万数据过滤删除很慢,无法获取锁,有啥好方法吗?

| username: Running

(The original post contained only an image, which was not translated.)

| username: tidb菜鸟一只 | Original post link

Are there any other processes updating this table?

| username: TiDBer_jYQINSnf | Original post link

30,000 rows is not much data. Use LIMIT to delete it in batches.

| username: DBRE | Original post link

Delete in segments according to the primary key ID range.
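To sketch the primary-key range approach: split the ID space into fixed-size chunks and issue one bounded DELETE per chunk, so each statement stays small and only locks a narrow key range. The table name `my_table`, the `id`/`status` columns, and the ID bounds below are all hypothetical placeholders; here the statements are only echoed rather than executed.

```shell
#!/bin/bash
# Sketch of range-based batch deletion (hypothetical table `my_table`
# with an integer primary key `id`). Each DELETE touches at most STEP
# rows, keeping transactions small and lock ranges narrow.
MIN_ID=1
MAX_ID=30000
STEP=5000

start=$MIN_ID
while (( start <= MAX_ID )); do
    end=$(( start + STEP - 1 ))
    sql="DELETE FROM my_table WHERE id BETWEEN $start AND $end AND status = 'expired';"
    echo "$sql"   # replace echo with: mysql -h127.0.0.1 -P4000 -uroot -e "$sql"
    start=$(( end + 1 ))
done
```

Compared with `DELETE ... LIMIT n`, the range form scans only the chunk it deletes, so each batch does not have to re-filter rows that earlier batches already removed.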

| username: Running | Original post link

No, I feel that deleting in TiDB is a big pitfall.

| username: Running | Original post link

30,000 records aren’t that many. Do we really need to go to such lengths?

| username: tidb菜鸟一只 | Original post link

Please post the execution plan for us to take a look.

| username: WalterWj | Original post link

It’s best to check the monitoring to see if it’s a read-write conflict or a write-write conflict.

| username: xingzhenxiang | Original post link

Here is the script I used to clean up a large amount of data in TiDB. This allows for the deletion of millions of records without affecting the normal use of TiDB. I hope it can help you.

date1=`date --date "7 days ago" +"%Y-%m-%d"`
delete_db_sql="delete from mysql_table where create_date_time<'$date1' limit 10000"

i=0
while ((++i)); do
    # -vvv makes mysql print "Query OK, N rows affected"; grab N
    a=`/bin/mysql -uroot -p123456 -A mysql_database -h127.0.0.1 --comments -e "${delete_db_sql}" -vvv | grep "Query OK" | awk '{print $3}'`
    # Stop once a batch deletes nothing (${a:-0} guards against an empty
    # value if the query fails and grep matches nothing)
    if (( ${a:-0} < 1 )); then
        break
    fi
    sleep 1              # throttle between batches so the cluster isn't saturated
    printf "%-4d" $i
done
| username: Running | Original post link

Impressive, even deleting has become so troublesome.

| username: Lucien-卢西恩 | Original post link

Refer to this teacher’s approach and prioritize checking for lock conflicts. This problem is most likely caused by lock conflicts leading to lock-acquisition timeouts. Additionally, you can view the current pessimistic lock waits in TiKV through DATA_LOCK_WAITS | PingCAP 文档中心.
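For reference, the lock-wait information mentioned above is exposed in TiDB's `INFORMATION_SCHEMA.DATA_LOCK_WAITS` table. A query along these lines shows which transactions are currently blocked and which transaction holds the lock; the connection parameters in the commented invocation are placeholders:

```shell
#!/bin/bash
# Sketch: inspect current pessimistic lock waits in a TiDB cluster.
# KEY is a reserved word, so it is backtick-quoted in the query.
sql='SELECT `KEY`, TRX_ID, CURRENT_HOLDING_TRX_ID, SQL_DIGEST FROM information_schema.data_lock_waits;'
echo "$sql"
# To actually run it against a TiDB cluster (placeholder credentials):
# mysql -h127.0.0.1 -P4000 -uroot -e "$sql"
```

Matching `CURRENT_HOLDING_TRX_ID` against the blocked `TRX_ID` values tells you which session to investigate (or kill) to release the conflict.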