What to Do If You Forget Which Machine is the TiDB Control Machine

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TiDB中控机忘记是哪台了怎么办

| username: Ming

【TiDB Usage Environment】Production Environment
【TiDB Version】
【Reproduction Path】
【Encountered Problem: Problem Phenomenon and Impact】
If the cluster was deployed from a control machine and you later forget which machine that is, how can you locate it?
【Resource Configuration】
【Attachments: Screenshots/Logs/Monitoring】

| username: xfworld | Original post link

SSH into each server and execute tiup to see which one responds, and that’s it…
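A rough sketch of that approach, assuming passwordless SSH as the tidb user and a hypothetical hosts.txt listing each candidate IP:

```shell
# Probe each host for a tiup binary or a populated ~/.tiup directory.
# hosts.txt (one IP per line) and the tidb user are assumptions of this sketch.
while read -r host; do
  echo "== $host =="
  ssh -o ConnectTimeout=5 "tidb@$host" 'command -v tiup || ls -d ~/.tiup' 2>/dev/null
done < hosts.txt
```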

| username: Ming | Original post link

What if the control machine is a separate, standalone server that is not one of the cluster nodes?

| username: Ming | Original post link

Can it be seen through netstat or ps?

| username: xfworld | Original post link

Check all the nodes through the dashboard; the server that is not listed among them is the TiUP control machine.

| username: 裤衩儿飞上天 | Original post link

A simple method: check whether your SSH client is set to log sessions automatically. If it is, you can search those logs.

| username: 我是咖啡哥 | Original post link

ls -lth /home/tidb/.tiup

If the directory exists and contains data, that machine is usually the control machine.
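As a cross-check, cluster metadata sits under the storage subdirectory in the standard TiUP layout, so something like this (paths assume the default layout) shows which clusters the machine manages:

```shell
# A populated .tiup directory with cluster metadata strongly suggests the control machine.
ls -lth /home/tidb/.tiup
ls /home/tidb/.tiup/storage/cluster/clusters/   # one subdirectory per managed cluster
```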

| username: tidb菜鸟一只 | Original post link

Check the logs in /home/tidb/.tiup/logs on each host.

| username: ffeenn | Original post link

The simplest and most effective method: open sessions to all nodes in Xshell, right-click to send the input to all sessions, and run history | grep tiup.

| username: Ming | Original post link

The .tiup directory needs to be on the control machine or the machine where tiup is installed, right?

| username: Ming | Original post link

Is this directory only available if tiup is installed?

| username: Ming | Original post link

It seems that the dashboard does not display the address of a separately deployed tiup control machine.

| username: gary | Original post link

Using monitoring to eliminate possibilities :smile:

| username: ohammer | Original post link

Log in to each node and run history | grep tiup to check whether tiup has ever been executed there.

| username: Ming | Original post link

In that case, it seems very useful to deploy the control machine in an environment with only a few servers, or on a designated node within the cluster.

| username: tidb菜鸟一只 | Original post link

That’s right, and your operation records and cluster configuration information will also be in this directory.

| username: Raymond | Original post link

You can check /var/log/secure on the cluster nodes to see which IPs have connected via SSH. Since tiup logs in to every node over SSH, the machine whose IP shows up everywhere is likely the one with tiup.
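For example, a quick filter for the accepted-login records (assuming the usual RHEL/CentOS sshd log format):

```shell
# Count source IPs of accepted SSH logins, most frequent first;
# an IP that appears on every node is a likely control machine.
sudo grep 'sshd.*Accepted' /var/log/secure \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```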

| username: xingzhenxiang | Original post link

Look for the /home/tidb/.tiup directory, and back it up to other TiDB machines so that tiup commands can be executed from those machines as well.
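For instance, with rsync over passwordless SSH (the backup host name is a placeholder):

```shell
# Copy the whole TiUP data directory (cluster metadata, keys, audit logs,
# and by default the tiup binary under .tiup/bin) to another machine.
rsync -avz /home/tidb/.tiup/ tidb@backup-host:/home/tidb/.tiup/
```

After copying, adding /home/tidb/.tiup/bin to PATH on the destination is usually all that is left before tiup commands work there.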

| username: Hacker_xUwtuKxa | Original post link

Just run the command tiup cluster list on each candidate; the control machine is the one that lists your cluster.

| username: 人如其名 | Original post link

In production, for security reasons, the clusters managed by tiup may be split by network zone (internal vs. external networks, production vs. testing, and so on). Moreover, the people maintaining them are not necessarily dedicated TiDB staff; they may use TiDB without knowing where tiup lives. Alerts, however, usually carry only a specific IP or a cluster name, which makes it hard for anyone who did not deploy the cluster to act on them. It is therefore recommended to put a script on each tiup control machine and add it to crontab: using commands such as tiup cluster push and tiup cluster exec, push (and periodically refresh) the control machine's own information onto the OS of every node, which makes a reverse lookup of the control machine easy.
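A minimal sketch of that idea (the cluster name, file path, and schedule are placeholders): a small script on the control machine records its own address and pushes it to every node.

```shell
#!/bin/bash
# push_controller.sh - run on the TiUP control machine.
# Writes this machine's hostname/IPs onto every node of the cluster,
# so anyone logged in to a node can find the control machine.
CLUSTER=test-cluster                                   # placeholder cluster name
echo "tiup control machine: $(hostname) $(hostname -I)" > /tmp/tiup_controller
tiup cluster push "$CLUSTER" /tmp/tiup_controller /tmp/tiup_controller
```

Scheduled, for example, as `0 * * * * /home/tidb/push_controller.sh` in the tidb user's crontab; on any node, `cat /tmp/tiup_controller` then points back to the control machine.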