Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: TiKV has already been taken offline, but there are still update logs and data (tikv已经下线,但是仍然有更新日志和数据)

The tikv-20163 node has already been taken offline, but there are still log and data updates in its installation directory.
Run display to check the cluster status and see whether the data scheduling for that TiKV node has completed.
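For example, a minimal sketch (<cluster-name> is a placeholder for your actual cluster name):

  tiup cluster display <cluster-name>

The Status column in the output shows each node's state, such as Up, Offline, or Tombstone.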
When scaling in a cluster, some components do not stop their services and delete data immediately. Instead, you need to wait for the data scheduling to complete and then manually run tiup cluster prune to clean them up.
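For reference, a rough sketch of that cleanup step (assuming the node already shows Tombstone in the display output; <cluster-name> is a placeholder):

  # remove nodes that have reached the Tombstone state and clean up their data and service files
  tiup cluster prune <cluster-name>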
Was it taken offline through a scale-in? Check the cluster status with display to see whether the decommissioning has completed.
It is probably just internal system activity. If nothing else works, simply disconnect the node from the network.
The TiKV process hasn't been stopped yet, has it? The scale-in operation probably hasn't completed.
The offline process of a TiKV node goes through several states: Up → Offline → Tombstone. When the TiKV status reaches Tombstone, the offline process is complete.
Check whether the node status is Tombstone, and then clean it up with tiup cluster prune.
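If you want to double-check from the PD side, pd-ctl can also show the store state (a sketch; the version tag, PD address, and store ID are placeholders for your environment):

  # state_name in the output should read "Tombstone" before you run tiup cluster prune
  tiup ctl:v<cluster-version> pd -u http://<pd-address>:2379 store <store-id>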
Just delete it once it is offline; why let it keep running?
Run display and check the status; it might still be scaling in.
Has the offline process actually finished? Check display to see whether it is still migrating Regions.
The node was still running before you deleted it. Carefully compare against the TiKV offline process described above and check which step you are at.
Use the command tiup cluster display <cluster-name> to check the status of the TiKV node:
If it shows the "Offline" status, Region migration is still in progress.
If it shows the "Tombstone" status, the node is completely offline. At that point, you can run tiup cluster prune <cluster-name> to completely remove the offline node.
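While the node is still in the Offline state, you can also roughly track Region migration from the PD side (a sketch with the same kind of placeholders as above):

  # leader_count and region_count should drop toward 0 as Regions migrate off the store
  tiup ctl:v<cluster-version> pd -u http://<pd-address>:2379 store <store-id>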
Try using “tiup cluster display” to check. The offline process might not be completed yet.
It is recommended to wait for a while, as going offline takes time. If the issue persists after a long time, then you need to check the error logs.
First, use tiup cluster display to confirm that the node is in the Tombstone state. I suspect it hasn't been completely decommissioned yet.