Limit 1 Takes a Long Time

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: limit 1 耗时长

| username: jackerzhou

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] v6.5
[Reproduction Path] Operations performed that led to the issue
[Encountered Issue: Problem Phenomenon and Impact]
Slow execution time for limit 1 query
[Resource Configuration]
[Attachments: Screenshots / Logs / Monitoring]
Execution Plan:
(screenshot in the original post)

Concurrency Parameters:
(screenshot in the original post)
| username: tidb菜鸟一只 | Original post link

Run `select * from mysql.tidb` to check whether GC is keeping up. It looks like there is too much data that hasn’t been GC’d yet. Or did you just run a large batch delete on this table?
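
For reference, a minimal sketch of that check (these are the standard GC entries in `mysql.tidb`; verify the names against your own cluster):

```sql
-- Inspect GC progress and retention settings from the mysql.tidb table.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM mysql.tidb
WHERE VARIABLE_NAME IN (
    'tikv_gc_safe_point',      -- versions older than this have been cleaned by GC
    'tikv_gc_last_run_time',   -- when GC last completed a run
    'tikv_gc_life_time',       -- how long old MVCC versions are retained (default 10m0s)
    'tikv_gc_run_interval'     -- how often GC runs (default 10m0s)
);
```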

| username: jackerzhou | Original post link

Judging from the delete_skipped_count figures, GC looks normal, and the GC interval is the default 10 minutes.

| username: h5n1 | Original post link

Check the GC-related panels in the TiKV-Details monitoring. If GC is normal, you can proceed as follows:

  1. During a business off-peak window, run a manual compaction on each TiKV instance, once for the write CF and once for the default CF (a sketch follows this list):
    tikv-ctl --host 10.172.65.156:20161 compact -c write -d kv --bottommost force
    tikv-ctl --host 10.172.65.156:20161 compact -c default -d kv --bottommost force

  2. Rebuild the table.
  3. Upgrade to the latest patch version.
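
A minimal sketch of step 1 for a whole cluster, assuming three TiKV instances: the first address is the one from this thread, the other two are placeholders for whatever instances you actually have.

```shell
# Manually compact the write and default column families on every TiKV
# instance; --bottommost force also rewrites the bottommost level, where
# old tombstones tend to accumulate and which normal compactions may skip.
for host in 10.172.65.156:20161 10.172.65.157:20161 10.172.65.158:20161; do
    tikv-ctl --host "$host" compact -c write   -d kv --bottommost force
    tikv-ctl --host "$host" compact -c default -d kv --bottommost force
done
```
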
| username: jackerzhou | Original post link

My understanding is that this “compact” is a compaction at the TiKV (RocksDB) layer, but why does TiKV need it here? Could you elaborate?

| username: h5n1 | Original post link

GC at the TiKV layer has already cleaned those versions up, but in RocksDB they are only marked for deletion (tombstones), and those marks are physically removed only during compaction.
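
If you want to gauge how many tombstones a Region is carrying before compacting, here is a hedged sketch; the Region ID 1000 is a placeholder, pick one from the ranges your slow query scans:

```shell
# Show per-Region SST/MVCC properties; a large num_deletes relative to
# mvcc.num_rows suggests many tombstones that only compaction will reclaim.
tikv-ctl --host 10.172.65.156:20161 region-properties -r 1000
```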

| username: zhanggame1 | Original post link

A delete actually writes a lot of data too (the delete marks), so you can clearly see the number of Regions grow while the deletion is running.

| username: Kongdom | Original post link

`explain analyze` actually executes the statement, so the plan it shows is closer to what really happens.
Does this table have a primary key? It feels like this should come back within seconds.
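
A hedged example (the table name `t` is made up for illustration): with `explain analyze`, the scan_detail section shows how many MVCC versions had to be stepped over to return that single row.

```sql
-- EXPLAIN ANALYZE really runs the query; compare total_process_keys (rows
-- actually handed to the executor) with total_keys / key_skipped_count
-- (versions and tombstones that were skipped along the way).
EXPLAIN ANALYZE SELECT * FROM t LIMIT 1;
```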

| username: h5n1 | Original post link

In TiDB, a delete also writes a record carrying a delete mark; that part happens at the TiDB layer.

| username: jackerzhou | Original post link

The ID is an auto-increment ID and also the primary key.

| username: Kongdom | Original post link

I tried it locally, and it looks like the situation described in the second reply. On my side only a handful of keys are scanned.

tikv_task:{time:1ms, loops:1}, scan_detail: {total_process_keys: 1, total_process_keys_size: 299, total_keys: 2, get_snapshot_time: 71.9µs, rocksdb: {key_skipped_count: 1, block: {cache_hit_count: 14}}}

| username: zhanggame1 | Original post link

What does compact mainly do, and what advantage does triggering it manually have over letting the database handle it by itself?

| username: h5n1 | Original post link

| username: zhanggame1 | Original post link

Thank you, I’ve learned something. The official documentation only explains how to use it and doesn’t go into these details.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.