How to handle a TiDB cluster node change when the cluster cannot be stopped or destroyed

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb集群某个节点变了,无法停止集群或者销毁集群,怎么处理

| username: Lock4U

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Symptoms and Impact]
The IP address of one node in the TiDB cluster has changed, and now the cluster can be neither stopped nor destroyed. How should this be handled?
[Resource Configuration]
[Attachments: Screenshots / Logs / Monitoring]

| username: Lock4U | Original post link

[The original post here contained only a screenshot, which is not available in the translation.]

| username: Lock4U | Original post link

Stopping or destroying the cluster reports an error: it cannot connect to the node whose IP has changed.

| username: Kongdom | Original post link

Refer to this case

| username: 胡杨树旁 | Original post link

Forcibly destroy the cluster?

| username: tidb菜鸟一只 | Original post link

Check whether the information recorded in the meta.yaml file is outdated. If it is, update the corresponding entries and then destroy the cluster.
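For reference, TiUP keeps the cluster topology on the control machine. A minimal sketch of where to look, assuming the default TiUP home (`~/.tiup`) and a placeholder cluster name:

```shell
# meta.yaml lives under TiUP's storage directory on the control machine
# (default TIUP_HOME; <cluster-name> is a placeholder)
cat ~/.tiup/storage/cluster/clusters/<cluster-name>/meta.yaml

# Prefer going through TiUP rather than hand-editing where possible;
# note that TiUP may refuse edits to some fields (e.g. host)
tiup cluster edit-config <cluster-name>
```

If the host field itself must change, hand-editing meta.yaml (after backing it up) is the usual workaround, at your own risk.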

| username: 考试没答案 | Original post link

The installation of the whole cluster involves the tidb-deploy and tidb-data directories. For a node whose IP has already changed and that needs to be removed and cleaned up, can these two folders simply be deleted physically from the operating system?
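If you do clean up by hand, a hedged sketch of the idea; the paths below are common defaults, not guaranteed to match the `deploy_dir`/`data_dir` in your topology:

```shell
# Run ON the orphaned node itself. Destructive: back up anything needed first.

# See which TiDB-related services are still running (unit names vary
# by component and port)
sudo systemctl list-units 'tikv*' 'tidb*' 'pd*'

# Remove the deploy and data directories (assumed default paths)
sudo rm -rf /tidb-deploy /tidb-data
```

Deleting the directories only cleans the machine; the node still has to be removed from the cluster metadata (e.g. via a forced scale-in) so that TiUP stops tracking it.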

| username: WalterWj | Original post link

Scale in with `--force`.
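A sketch of the forced scale-in, with placeholder cluster name and node ID:

```shell
# --force removes the node from the topology even when TiUP cannot
# reach it over SSH; files left on the dead node are not cleaned up
tiup cluster scale-in <cluster-name> --node <old-ip>:<port> --force
```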

| username: huanglao2002 | Original post link

  1. Since the system reports “[SSH connection refused]”, first check the SSH service on the server itself. If it is not running properly, fix the SSH service on that server before anything else.
  2. If the server is fine, test an SSH login from the TiUP control machine.
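The two checks above can be sketched as follows, assuming the deploy user is `tidb` and SSH listens on port 22 (both are assumptions; adjust to your environment):

```shell
# 1. On the problem server: is sshd up?
sudo systemctl status sshd

# 2. From the TiUP control machine: can the deploy user log in?
#    (tidb user and port 22 are assumptions)
ssh -p 22 tidb@<node-ip> 'echo ok'
```
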
| username: xingzhenxiang | Original post link

`tiup cluster scale-in <cluster-name> --node <ip:port> --force`, and then scale out again.
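Putting that together, a sketch with placeholder names; the scale-out topology snippet assumes the removed node was a TiKV instance, so adjust the component section to match what the node actually ran:

```shell
# 1. Force-remove the unreachable node from the cluster metadata
tiup cluster scale-in <cluster-name> --node <old-ip>:<port> --force

# 2. Describe the replacement node in a topology file, e.g. scale-out.yaml:
#      tikv_servers:
#        - host: <new-ip>

# 3. Add the new node back into the cluster
tiup cluster scale-out <cluster-name> scale-out.yaml
```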