Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: The meaning of the TiKV log entry "shrink cache by tick"
[TiDB Usage Environment] Testing
[TiDB Version] v6.5.0
[Encountered Problem: Phenomenon and Impact]
One of the three TiKV nodes keeps printing the following log; I'm not sure what the impact is:
[2023/02/06 15:37:31.206 +08:00] [INFO] [sst_importer.rs:442] ["shrink cache by tick"] ["retain size"=0] ["shrink size"=0]
During this period, the following log also appears intermittently:
[2023/02/06 15:53:54.085 +08:00] [INFO] [util.rs:598] ["connecting to PD endpoint"] [endpoints=http://${ip}:2379]
The info logs don’t seem to indicate a major issue.
It feels like it’s caused by empty regions. 
I hope so, I’ll keep observing.
Warnings generally don't need to be acted on, and info-level logs can be ignored…
But this feels a bit strange. The logs on the TiDB side show that it has been running GC continuously, and the GC logs also look somewhat abnormal.
From 08:38:14 to 08:48:04 should be one complete GC round, which took about ten minutes. However, it seems to have been resolving locks the whole time. During this period TiDB had no load at all, yet the CPU usage of each of the three TiKV nodes was around 100%.
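If you want to confirm what GC was doing during that window, one way (assuming you can connect to the cluster with a MySQL client) is to query the GC bookkeeping rows that TiDB keeps in the mysql.tidb table; these show the current GC leader, the last run time, and the safe point.

```sql
-- Inspect TiDB's GC status rows in mysql.tidb.
-- tikv_gc_leader_desc  : which TiDB instance currently owns GC
-- tikv_gc_last_run_time: when the last GC round started
-- tikv_gc_safe_point   : how far the GC safe point has advanced
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM mysql.tidb
WHERE VARIABLE_NAME LIKE 'tikv_gc%';
```

Comparing tikv_gc_last_run_time and tikv_gc_safe_point before and after the 08:38–08:48 window should tell you whether that span really was a single GC round.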
The green one is the TiDB node responsible for GC.
This is the QPS of TiKV.
You might want to read up on how GC works, but this looks fine. TiKV is only using 1 core.
It shouldn’t affect usage, right?
It indeed doesn't have much impact, but on version 6.1.0 GC didn't seem to consume this much, which is a bit concerning.
The difference between the new and old versions is probably GC in the compaction filter. You can look into that.
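For reference, recent TiKV versions run GC inside the RocksDB compaction filter rather than as a separate scan. If you want to check whether it is enabled on your cluster (a sketch, assuming a SQL client connected to the cluster; the config key name is the one documented for recent TiKV versions):

```sql
-- Check the TiKV compaction-filter GC switch via SHOW CONFIG.
SHOW CONFIG WHERE TYPE = 'tikv' AND NAME = 'gc.enable-compaction-filter';
```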