Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: 3万数据过滤删除很慢,无法获取锁,有啥好方法吗? ("Filtering and deleting 30,000 rows is very slow and the lock cannot be acquired. Is there a good approach?")
[A screenshot in the original post is not available.]
30,000 rows is not much data. Use LIMIT to delete it in batches.
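For example, a minimal sketch of batched deletion; the table t, the create_time column, and the cutoff date are hypothetical stand-ins for your schema:

-- Delete at most 10,000 matching rows per statement, keeping each
-- transaction small; rerun until the statement affects 0 rows.
DELETE FROM t WHERE create_time < '2024-01-01 00:00:00' LIMIT 10000;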
30,000 records aren’t that many. Do we really need to go to such lengths?
It’s best to check the monitoring to see whether this is a read-write conflict or a write-write conflict.
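If it looks like a write-write conflict, one quick check (a sketch, assuming TiDB v5.1 or later, where this system table exists) is to list the transactions currently running across the cluster and look for a long-lived writer touching the same rows:

-- Transactions currently executing on all TiDB instances.
SELECT * FROM information_schema.cluster_tidb_trx;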
Here is the script I used to clean up a large amount of data in TiDB. It can delete millions of records without affecting normal use of the cluster. I hope it helps.
#!/bin/bash
# Delete rows older than 7 days in batches of 10,000 so each
# transaction stays short and does not hold locks for long.
date1=$(date --date "7 days ago" +"%Y-%m-%d")
delete_db_sql="delete from mysql_table where create_date_time < '$date1' limit 10000"
i=0
while ((++i)); do
    # With -vvv, mysql prints "Query OK, N rows affected"; extract N.
    a=$(/bin/mysql -uroot -p123456 -A mysql_database -h127.0.0.1 --comments -e "${delete_db_sql}" -vvv | grep "Query OK" | awk '{print $3}')
    # Stop once a batch deletes nothing (treat a missing count as 0).
    if (( ${a:-0} < 1 )); then
        break
    fi
    sleep 1               # short pause between batches to ease the load
    printf "%-4d" "$i"    # progress: print the batch number
done
As the reply above suggests, prioritize checking for lock conflicts. This problem is most likely caused by lock conflicts leading to lock-wait timeouts. You can also inspect the current pessimistic lock waits in TiKV through DATA_LOCK_WAITS | PingCAP Docs.
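For reference, a minimal query against that system table (available since TiDB v5.1); each row describes one blocked pessimistic lock request and the transaction currently holding the lock:

-- Current pessimistic lock waits in the TiKV cluster.
SELECT * FROM information_schema.data_lock_waits;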