Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: TiUP持久化数据完全丢失,如何恢复?
【TiDB Usage Environment】Production
【TiDB Version】4.x
【Encountered Problem】The machine where TiUP is located has completely crashed and cannot manage the cluster
【Reproduction Path】None
【Problem Phenomenon and Impact】
We have a 4.x environment with PD, TiDB, TiKV, TiFlash, and other components distributed across more than 10 machines. Deployment and maintenance are normally handled through tiup on the central control machine. The problem is that the machine hosting the control node has failed completely, and there is no backup of tiup's persisted data, so we can no longer perform maintenance, migration, or other operations on this cluster. Is it possible to fall back to manual cluster maintenance in this situation (I have not found any official documentation on manual maintenance), or is there a way to rebuild tiup?
Find a new control machine and rebuild the environment:
- Install the tiup toolkit.
- Write a topology.yaml describing the original cluster (you need to fill in the information of the original cluster's nodes).
- tiup cluster deploy tidb-xxx <cluster-version> ./topology.yaml
- tiup cluster display tidb-xxx to check the status of the cluster (a rough sketch of these steps, with placeholder values, follows below).
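To make that concrete, here is a minimal sketch of those steps. Everything in it is illustrative: the install URL is the standard TiUP mirror, but the hosts, SSH user, directories, version string v4.0.16, and the cluster name tidb-xxx are placeholders and must be replaced with the original cluster's real layout and version.

```bash
# Install tiup on the new control machine (standard TiUP mirror).
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source ~/.bash_profile
tiup install cluster

# Hypothetical topology.yaml: every host, port, and directory is a placeholder
# and must match the layout actually deployed on the original nodes.
cat > topology.yaml <<'EOF'
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
pd_servers:
  - host: 10.0.1.1
tidb_servers:
  - host: 10.0.1.2
tikv_servers:
  - host: 10.0.1.3
tiflash_servers:
  - host: 10.0.1.4
EOF

# Re-register the cluster under its original name; the version must be the one
# the cluster is actually running (v4.0.16 here is only an example).
tiup cluster deploy tidb-xxx v4.0.16 ./topology.yaml
tiup cluster display tidb-xxx
```

Whether deploy can safely be re-run against directories that already hold data is exactly the concern raised below, so verify every path and port against the live nodes first.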
It is recommended to set up a scheduled task to regularly back up the tiup environment information.
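For example, the scheduled task can be as simple as archiving the control machine's ~/.tiup directory and copying it to another host. The schedule, the /backup paths, and the backup-host name below are assumptions, not anything TiUP provides out of the box:

```bash
# Hypothetical crontab entry: archive ~/.tiup at 02:00 every night and ship the
# tarball to another machine. Adjust paths, schedule, and destination host.
0 2 * * * tar -czf /backup/tiup-$(date +\%Y\%m\%d).tar.gz -C "$HOME" .tiup && scp -q /backup/tiup-$(date +\%Y\%m\%d).tar.gz backup-host:/backup/tiup/
```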
Is tiup cluster deploy tidb-xxx ./topology.yaml strictly idempotent? The cluster's information does still exist, but I cannot fully confirm it, and I am afraid that if something goes wrong the operation might overwrite the original resources.
You need to check the parameters of the PD, TiDB, and TiKV instances on each node one by one, to avoid overriding parameters or ending up inconsistent with the previous settings. The operation itself is idempotent.
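One low-tech way to do that check is to read the config files directly on each node and compare them with the topology you wrote. The SSH user tidb and the /tidb-deploy/<component>-<port>/conf layout below are the tiup defaults and are assumptions about this particular cluster:

```bash
# Print the config files actually on a few nodes; hosts, ports, and paths are
# placeholders for the real instances.
ssh tidb@10.0.1.1 'cat /tidb-deploy/pd-2379/conf/pd.toml'
ssh tidb@10.0.1.2 'cat /tidb-deploy/tidb-4000/conf/tidb.toml'
ssh tidb@10.0.1.3 'cat /tidb-deploy/tikv-20160/conf/tikv.toml'
```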
Reconstructed the configuration file by hand; the cluster has been restored.
Well, it would be great if the tiup tool were more seamlessly integrated with Git, similar to Ansible, with all the data that needs to be persisted stored in Git, following the so-called GitOps approach.
Databases definitely should not be exposed to the public internet… It would be great if tiup could automatically send a copy of its data to all machines in the cluster, since the probability of every machine in the cluster failing at once is very low.
Key configuration files still need to be backed up.
For which kinds of cluster configuration changes is it necessary to back up the .tiup directory? Changes to the topology? Changes to the cluster configuration file?
You should back it up after any command that modifies the cluster. tiup now has a backup command, but you still need to write your own backup scripts and so on. I am planning to put the underlying tiup data on a cloud drive.
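If the tiup cluster component is recent enough, the built-in backup looks roughly like the following; treat the exact subcommand and arguments as something to confirm with tiup cluster --help on your version:

```bash
# Back up the local metadata for one cluster (kept under
# ~/.tiup/storage/cluster/clusters/<name>) into a tarball, and restore it later
# on a new control machine. Verify the exact syntax on your tiup version.
tiup cluster meta backup tidb-xxx
tiup cluster meta restore tidb-xxx <path-to-backup-tarball>
```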
Are you referring to GitHub on the public internet? I was talking about a privately deployed Git server such as GitLab on the corporate intranet.
That works. My idea is that, if it could be done non-intrusively, TiDB could automatically distribute the backups to other nodes, similar to TiKV's multiple replicas.
Similar to command-line tools like Kubernetes' kubectl and Ceph's CLI, it could be installed on all nodes of the cluster, with the persistent data stored inside the cluster itself. If it also had import and export functions, that would be perfect.
There is nothing secret about this part; you can install tiup manually;
Then manually reconstruct meta.yaml (fill it in from memory) and run a reload. The IPs, ports, configurations, and labels in it must be correct;
After reloading it should be able to recover. The corresponding information can be retrieved from the surviving machines with pd-ctl (store info), and if you can still connect to the database, the parameters are available in cluster_config.
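As a sketch of that information gathering, assuming a surviving PD at 10.0.1.1:2379 and a reachable TiDB at 10.0.1.2:4000 (both placeholders), with a pd-ctl binary taken from the matching 4.x release package or invoked via tiup ctl once tiup is reinstalled:

```bash
# Store list (addresses, labels, state) and PD members from a surviving PD.
pd-ctl -u http://10.0.1.1:2379 store
pd-ctl -u http://10.0.1.1:2379 member

# If SQL access still works, effective parameters and the instance list are in
# information_schema (available since TiDB 4.0).
mysql -h 10.0.1.2 -P 4000 -u root -p -e "SELECT * FROM information_schema.cluster_config;"
mysql -h 10.0.1.2 -P 4000 -u root -p -e "SELECT * FROM information_schema.cluster_info;"
```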