Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: 如何删除offline状态tikv (How to delete a TiKV node in Offline state)

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.3.1, 3 replicas
[Reproduction Path] What operations were performed when the issue occurred
Accidentally force-removed one TiKV node with the command tiup cluster scale-in tidb -N 172.16.61.21:20160 --force -y (here tidb is the cluster name).
Then scaled out a new TiKV node, and the cluster status is currently normal.
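For reference, a minimal sketch of how to verify the topology after such an operation (the PD endpoint 172.16.61.10:2379 is an assumed placeholder, not from the original post):

# Show the topology as tiup sees it; the force-removed node is absent here
tiup cluster display tidb

# Query PD directly; it can still list stores that tiup no longer tracks
tiup ctl:v5.3.1 pd -u http://172.16.61.10:2379 store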
[Encountered Issue: Problem Phenomenon and Impact]
Checking the cluster with tiup cluster display no longer shows the scaled-in node, but pd-ctl store still lists the store with state Offline and a region_count of 120.
The node's data directory no longer exists after the scale-in. What method can be used to delete this store? (One possible approach is sketched after the store output below.)
{
  "store": {
    "id": 4,
    "address": "172.16.61.21:20160",
    "state": 1,
    "version": "5.3.1",
    "status_address": "172.16.61.21:20180",
    "git_hash": "ecc2549ed63fc21f1f6e11f1b85f19c2465234a5",
    "start_timestamp": 1690773632,
    "deploy_path": "/opt/tidb-deploy/tikv-20160/bin",
    "last_heartbeat": 1690773644351448028,
    "state_name": "Offline"
  },
  "status": {
    "capacity": "0B",
    "available": "0B",
    "used_size": "0B",
    "leader_count": 0,
    "leader_weight": 1,
    "leader_score": 0,
    "leader_size": 0,
    "region_count": 120,
    "region_weight": 1,
    "region_score": 1048,
    "region_size": 1048,
    "slow_score": 0,
    "start_ts": "2023-07-31T11:20:32+08:00",
    "last_heartbeat_ts": "2023-07-31T11:20:44.351448028+08:00",
    "uptime": "12.351448028s"
  }
}
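For context, "state": 1 ("Offline") means PD is still waiting for the 120 region peers recorded on store 4 to be replaced on other stores; only when region_count reaches 0 does the store become Tombstone and become removable. Below is one possible cleanup path, a sketch assuming PD is reachable at 172.16.61.10:2379 (a placeholder) and that the other two replicas of each affected region are healthy:

# List the regions that still record a peer on store 4
tiup ctl:v5.3.1 pd -u http://172.16.61.10:2379 region store 4

# Tell PD to drop the dead peer from a stuck region (repeat per region id)
tiup ctl:v5.3.1 pd -u http://172.16.61.10:2379 operator add remove-peer <region-id> 4

# Once region_count reaches 0 the store turns Tombstone; then clear it from PD
tiup ctl:v5.3.1 pd -u http://172.16.61.10:2379 store remove-tombstone

Normally PD's replica checker recreates the missing replicas on its own and the store drains to Tombstone without manual operators; the remove-peer step is only needed for peers that stay stuck.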