After deleting the .tiup directory and reinstalling tiup, the original cluster can no longer be seen with tiup cluster list. How can the cluster be added back to tiup management?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 删除 .tiup 目录后,重新安装tiup,使用 tiup cluster list 无法查看原有集群,如何将集群添加到tiup管理

| username: EDG-给我冲

【TiDB Usage Environment】Production Environment
【TiDB Version】v6.5.0
【Encountered Problem: Phenomenon and Impact】The .tiup directory was cleaned up and deleted. After reinstalling a new version of tiup, tiup cluster list no longer shows the previously deployed cluster.
【Resource Configuration】
【Attachments: Screenshots/Logs/Monitoring】

| username: 啦啦啦啦啦 | Original post link

All the cluster metadata is stored in that directory; once it is deleted, tiup of course cannot see the cluster anymore. Refer to this link to restore it:

| username: EDG-给我冲 | Original post link

Thank you, I will give it a try.

| username: Anna | Original post link

To be clear about the goal of this recovery: it restores the metadata under .tiup, which is the foundation tiup relies on to manage the cluster. Once this metadata is back, the newly installed tiup can continue to maintain the previous cluster.

Recovery Steps

  1. Manually write a topo file that matches the final state of the cluster.
  2. Batch-move the instance-level bin/{instance}-server files on every node; see the explanation in [2].
  3. Follow the normal tiup deployment steps and perform the deploy operation; see the explanation in [3].

[2] Since the deploy will be done with the newly installed tiup, the instance-level binary files will be overwritten. This does not affect the running services, but during the replacement a "Text file busy" error may occur, so you need to manually rename all binary files first, to _old or some other name (for example, mv tidb-server tidb-server_old). This step must be done thoroughly on every node to avoid failures during deployment; otherwise steps 2 and 3 have to be repeated.
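
A minimal illustration of what [2] looks like in practice (the hosts and deploy directories below are placeholders, not the poster's real topology; take the actual paths from your own deploy_dir settings):

```
# On every node, rename the binaries of the running instances so the
# later deploy can write new files without hitting "Text file busy".
ssh tidb@10.0.1.1 'mv /tidb-deploy/pd-2379/bin/pd-server /tidb-deploy/pd-2379/bin/pd-server_old'
ssh tidb@10.0.1.2 'mv /tidb-deploy/tidb-4000/bin/tidb-server /tidb-deploy/tidb-4000/bin/tidb-server_old'
ssh tidb@10.0.1.3 'mv /tidb-deploy/tikv-20160/bin/tikv-server /tidb-deploy/tikv-20160/bin/tikv-server_old'
# The running processes keep using the binaries they already loaded,
# so renaming the files does not affect the services.
```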

[3] Since the current tiup has no metadata for this cluster, the deploy will not report directory or port conflicts. The deploy downloads the specified version of the binary files and overwrites the ones in the original cluster. (Since this is an unconventional deploy, the overwrite is considered a normal operation here, and a later upgrade will back up the bin directory.)
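
As a rough sketch of step 3, assuming the cluster is named es_tidb, runs v6.5.0, and topology.yaml is the hand-written topo file from step 1 (names are placeholders; --user is the SSH user you deploy with):

```
# Re-deploy with exactly the version the cluster is already running.
# Because this tiup has no metadata for the cluster, it will not report
# directory or port conflicts; it pushes the binaries and re-creates the
# metadata under ~/.tiup.
tiup cluster deploy es_tidb v6.5.0 topology.yaml --user tidb
tiup cluster list      # the cluster should show up again
```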

Precautions

  1. The version specified during deploy must be correct, i.e. exactly the same version the cluster is currently running.
  2. The binary files on all nodes need to be moved.
  3. Make sure the topology file is complete and consistent with the final set of cluster nodes. Pay attention to parameters such as deploy_dir, data_dir, port, and status_port.

| username: redgame | Original post link

  1. Manually write the final cluster topo file.
  2. Batch move the instance-level bin/{instance}-server files.
  3. Follow the tiup deployment steps to perform the deploy operation.

| username: srstack | Original post link

You can regularly back up the TiUP metadata:

```
tiup cluster meta
Usage:
  meta [command]

Available Commands:
  backup      backup topology and other information of cluster
  restore     restore topology and other information of cluster

Flags:
  -h, --help   help for meta

Global Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "meta help [command]" for more information about a command.
```

| username: EDG-给我冲 | Original post link

The final solution: create the path /root/.tiup/storage/cluster/clusters/es_tidb according to your own cluster (here es_tidb is the cluster name). The meta.yaml file and the ssh directory are stored under this path. meta.yaml is the cluster topology, and the ssh directory has to contain the private key, so that the tidb user can SSH from the control machine to the other nodes with this key. The structure of the meta.yaml file is as follows.
[screenshot of the meta.yaml structure]
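
Since the screenshot is not reproduced above, here is a hand-written sketch of the idea, with placeholder hosts and directories; the exact field set of meta.yaml (and the key file name, shown here as id_rsa to match what tiup normally generates) should be checked against a healthy cluster of the same version rather than copied from this sketch:

```
# Re-create the metadata directory for the cluster (name: es_tidb).
mkdir -p /root/.tiup/storage/cluster/clusters/es_tidb/ssh

# Place the private key the tidb user uses to reach all nodes from the
# control machine; path to the key is a placeholder.
cp /path/to/private_key /root/.tiup/storage/cluster/clusters/es_tidb/ssh/id_rsa

# Skeleton of meta.yaml -- values are illustrative placeholders only.
cat > /root/.tiup/storage/cluster/clusters/es_tidb/meta.yaml <<'EOF'
user: tidb
tidb_version: v6.5.0
topology:
  global:
    user: tidb
    ssh_port: 22
    deploy_dir: /tidb-deploy
    data_dir: /tidb-data
  pd_servers:
    - host: 10.0.1.1
  tidb_servers:
    - host: 10.0.1.2
  tikv_servers:
    - host: 10.0.1.3
  # monitored / monitoring_servers / other sections follow the same
  # layout as a normal tiup topology file.
EOF
```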

| username: zhanggame1 | Original post link

Learned something new.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.