Failed to Start the TiDB Service (Port 4000) on a Three-Node, Single-Replica Cluster

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 三节点单副本tidb4000服务启动失败

| username: September

【TiDB Usage Environment】Local environment, used for storing some historical log data
【TiDB Version】6.5.0
【Reproduction Path】
The cluster consists of 3 TiKV nodes, 1 PD node, and 1 TiDB node.
The TiKV nodes are 192.168.30.30, 192.168.30.31, and 192.168.30.32; PD and TiDB run on 192.168.30.33.
A couple of days ago the company had a power outage, and the server at 192.168.30.30 came back up in rescue mode. We decided to abandon that server and forcibly removed it with `tiup cluster scale-in tidb-test --node 192.168.30.30:20160 --force`. That command succeeded, but `tiup cluster start tidb-test -R tidb` still fails, and a full `tiup cluster stop tidb-test` followed by `tiup cluster start tidb-test` doesn't help either. I am willing to abandon the data on the .30 server; I only want to keep the data on the remaining two servers and bring the TiDB service on port 4000 back up. What should I do?
【Encountered Problem: Symptoms and Impact】
【Resource Configuration】Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
【Attachments: Screenshots/Logs/Monitoring】
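For diagnosis, a minimal sketch of how to check what the cluster and PD currently think of the stores after the force scale-in. The PD address 192.168.30.33:2379 is an assumption (the default PD client port; the post does not state it):

```shell
# Overall component status as seen by tiup
tiup cluster display tidb-test

# Ask PD for the state of every TiKV store (store IDs, addresses,
# and state: Up / Offline / Tombstone), using the pd-ctl that
# matches the cluster version
tiup ctl:v6.5.0 pd -u http://192.168.30.33:2379 store
```

If the force-removed store still shows up as Offline rather than Tombstone, PD has not finished taking it out of the cluster, which matches the symptom described below.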



| username: xfworld | Original post link

The scale-in (offline) operation hasn't actually completed, right?
The logs show that node 30 is still receiving and processing requests.

At the moment, it seems the only option left is unsafe recovery.
Reference documentation: Online Unsafe Recovery (https://docs.pingcap.com/tidb/stable/online-unsafe-recovery)

If the cluster cannot finish processing the offline store and other irreversible operations have already been performed on top of that, it is recommended that you redeploy the cluster.
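A hedged sketch of the Online Unsafe Recovery flow from that documentation (available since v6.1, so it applies to this v6.5.0 cluster). `<store-id>` is a placeholder; look it up first, and the PD address is the same assumption as above:

```shell
# Find the store ID of the dead TiKV (192.168.30.30:20160)
tiup ctl:v6.5.0 pd -u http://192.168.30.33:2379 store

# Tell PD to drop the failed store and rebuild Region metadata
# from whatever replicas survive on the other stores
tiup ctl:v6.5.0 pd -u http://192.168.30.33:2379 unsafe remove-failed-stores <store-id>

# Watch progress until the task reports it has finished
tiup ctl:v6.5.0 pd -u http://192.168.30.33:2379 unsafe remove-failed-stores show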

| username: 我是咖啡哥 | Original post link

Single replica? If one TiKV node is down, won't the data on it be lost?

| username: tidb菜鸟一只 | Original post link

First scale out a new TiKV node, then scale in (decommission) the node that reported the error.
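A minimal sketch of that sequence. The new host 192.168.30.34, the topology file name, and the failing node's address are all placeholders, not values from the thread:

```shell
# Hypothetical topology file declaring the new TiKV host
cat > scale-out.yaml <<EOF
tikv_servers:
  - host: 192.168.30.34
EOF

# Add the new TiKV node to the cluster
tiup cluster scale-out tidb-test scale-out.yaml

# Once the new store is Up, decommission the problematic node
# WITHOUT --force, so PD can migrate its Regions off first
tiup cluster scale-in tidb-test --node <failing-tikv-ip>:20160
```

Note the caveat raised above: with `max-replicas=1`, Regions whose only replica lived on a store that is already dead have nothing left to migrate, so this graceful path only works for a node that is still up; data solely on a failed single-replica store cannot be recovered this way.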