Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: What should I do if I forget which machine is the TiDB control machine?
【TiDB Usage Environment】Production Environment
【TiDB Version】
【Reproduction Path】
【Encountered Problem: Problem Phenomenon and Impact】
If the cluster was deployed from a control machine and you later forget which machine that was, how can you locate the control machine?
【Resource Configuration】
【Attachments: Screenshots/Logs/Monitoring】
SSH into each server and execute tiup to see which one responds, and that’s it…
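If there are many machines, that check can be scripted. A minimal sketch, assuming a hypothetical hosts.txt with one address per line and that the deploy user tidb can SSH to each host (the tiup binary is normally installed at ~/.tiup/bin/tiup for that user):
while read -r h; do
  # report which hosts have a tiup binary for the tidb user
  ssh tidb@"$h" 'test -x ~/.tiup/bin/tiup && echo "tiup found on $(hostname)"'
done < hosts.txt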
What if it is a standalone server?
Can it be seen through netstat or ps?
Check all the nodes through the dashboard; whichever of your machines is not listed there is the TiUP control machine.
A simple method: check whether your SSH client is set to log sessions automatically. If it is, you can search those logs.
ls -lth /home/tidb/.tiup
If that directory contains data, that machine is usually the control machine.
Check the logs in /home/tidb/.tiup/logs on each host.
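A sketch of doing that in one pass, using the same hypothetical hosts.txt and the tidb deploy user as above:
while read -r h; do
  echo "== $h =="
  # hosts without this directory, or with an empty one, are probably not the control machine
  ssh tidb@"$h" 'ls -lth /home/tidb/.tiup/logs 2>/dev/null | head -n 5'
done < hosts.txt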
The simplest and most effective method: open all nodes with Xshell, right-click to send the session to all, and enter history | grep tiup.
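Without Xshell, the same thing can be scripted over SSH. Note that history is a shell builtin and is not populated in a non-interactive session, so it is more reliable to grep the saved history file directly; a sketch, assuming bash and the tidb user:
while read -r h; do
  # count how many saved commands on each host mention tiup
  ssh tidb@"$h" 'echo "$(hostname): $(grep -c tiup ~/.bash_history 2>/dev/null) tiup commands in history"'
done < hosts.txt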
The .tiup directory needs to be on the control machine or the machine where tiup is installed, right?
Is this directory only available if tiup is installed?
It seems that the dashboard does not display the address of a separately deployed tiup machine.
Use monitoring to eliminate possibilities.
Log into each node and use history | grep tiup to check for any instances where tiup might have been executed.
In this case, it would be very useful to deploy the control machine in an environment with only a few servers, or on a specific node within the cluster.
That’s right, and your operation records and cluster configuration information will also be in this directory.
You can check /var/log/secure to see which machine’s IP has connected via SSH. That machine might be the one with tiup.
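For example, on one of the cluster nodes you could rank the source IPs of accepted SSH logins for the tidb user; the control machine tends to show up repeatedly. A sketch (on Debian/Ubuntu the file is /var/log/auth.log instead):
# list source IPs of accepted SSH logins for the tidb user, most frequent first
sudo grep 'Accepted .* for tidb from' /var/log/secure \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' \
  | sort | uniq -c | sort -rn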
Look at this directory /home/tidb/.tiup and back it up to other TiDB machines so that the tiup command can be executed on other machines as well.
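A sketch of that backup, assuming a hypothetical second machine 10.0.0.2 reachable as the tidb user; ~/.tiup holds the cluster metadata, SSH keys, and audit logs, and the tiup binary itself lives under ~/.tiup/bin:
# copy the whole TiUP state directory to another machine
rsync -avz /home/tidb/.tiup/ tidb@10.0.0.2:/home/tidb/.tiup/
# on the target machine, make sure the tiup binary is on the tidb user's PATH
echo 'export PATH=/home/tidb/.tiup/bin:$PATH' >> /home/tidb/.bashrc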
Just run the command tiup cluster list.
In production, for security reasons, the clusters managed by tiup may be split across network zones, internal and external networks, production versus test environments, and so on. Moreover, the operations and maintenance staff are not necessarily dedicated TiDB people: they may use TiDB without knowing where tiup lives. Alerts, however, usually only carry a specific IP or a cluster name, which makes it hard for anyone who did not deploy the cluster to act on them. It is therefore recommended to place a script on each tiup control machine and add it to crontab: using commands like tiup push and exec, push (and keep refreshing) the control machine’s own information onto the OS of every node, making it easy to look up the control machine in reverse from any node.
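A sketch of that idea, assuming a cluster named mycluster and a hypothetical marker file /tmp/tiup_control_machine; tiup cluster exec runs a shell command on every node of the cluster:
# crontab entry on the control machine (tidb user): once a day, write this
# machine's address into a marker file on every node, so the control machine
# can be found later from any node
0 3 * * * /home/tidb/.tiup/bin/tiup cluster exec mycluster --command "echo 'tiup control machine: $(hostname -I)' > /tmp/tiup_control_machine"
After this runs, anyone logged into a cluster node can simply cat /tmp/tiup_control_machine to find where tiup is.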