When the tiup mirror is 6.1.2, TiDB 5.0 and 4.0 clusters cannot be restarted. Has anyone encountered this issue?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 当tiup mirror 是6.1.2的时候,无法去重启 tidb 5.0和4.0集群,请问大家遇到过这种问题嘛

| username: Raymond

When running tiup mirror show, it shows version 6.1.2. The tiup version is 1.11.0, which ships with the 6.1.2 mirror. Starting the Prometheus component of v5 and v4 clusters fails. However, after running tiup mirror set v5.3.3, starting the Prometheus component of v5 and v4 clusters succeeds. I previously believed that starting and stopping components with tiup should not be coupled to the mirror setting or the cluster version, but this logic seems to have changed after version 6.1, and I am not sure what the purpose of this design change is.
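
For reference, a rough reproduction of the commands involved (the cluster name tidb-test and the mirror path are placeholders; tiup mirror set here points at wherever the v5.3.3 offline package was unpacked):

```
# Show which mirror tiup is currently using (reports 6.1.2 in my case)
tiup mirror show

# Start only the Prometheus component of an existing cluster
tiup cluster start tidb-test -R prometheus

# Switch to a v5.3.3 offline mirror; after this, the same start succeeds
tiup mirror set /path/to/tidb-community-server-v5.3.3-linux-amd64
```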

| username: 我是咖啡哥 | Original post link

Could it be a version compatibility issue?

| username: h5n1 | Original post link

Please post the specific error message.

| username: WalterWj | Original post link

Try merging the sources with tiup.
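
Roughly something like this, assuming both offline packages are unpacked locally (the paths are placeholders):

```
# Use the 6.1.2 offline mirror as the base
tiup mirror set /path/to/tidb-community-server-v6.1.2-linux-amd64

# Merge the v5.3.3 offline mirror into it, so components of both
# versions can be resolved from a single mirror
tiup mirror merge /path/to/tidb-community-server-v5.3.3-linux-amd64
```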

| username: Raymond | Original post link

If you merge, it will definitely work, but I’m just curious why it won’t work without merging.

| username: Raymond | Original post link

The version compatibility check seems to have been introduced around 6.1.

| username: WalterWj | Original post link

I think it’s very reasonable.

| username: Raymond | Original post link

The error is:

```
version 4.0.13 on linux/amd64 for component drainer not found: unknown version: check config failed
```

| username: srstack | Original post link

During reload, tiup checks the config by default. For historical reasons, the config check also verifies version numbers and other related information against the mirror, so if the corresponding version is not found in the mirror, an error is reported.
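
To confirm whether a given component version is resolvable from the current mirror, you can list it; for example, for the drainer v4.0.13 from the error above:

```
# Lists the drainer versions available in the currently configured mirror;
# if v4.0.13 is not in the output, the check-config step will fail
tiup list drainer
```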

| username: Raymond | Original post link

Could you please explain this in detail?

| username: Raymond | Original post link

However, this is just restarting one node; why does it need version compatibility verification? Based on actual usage, there was no such verification before, so why was it added in version 6.1?

| username: WalterWj | Original post link

I think he’s right.

| username: Raymond | Original post link

What I observed is this: restarting the Prometheus component of a TiDB 4.0.x cluster with the 5.3.3 mirror does not report an error, but restarting it with the 6.1.2 mirror does. This indicates that the mechanism changed after version 6.1.

| username: WalterWj | Original post link

The tiup versions are different, right? This behavior depends on the logic of the tiup tool itself.

| username: WalterWj | Original post link

The logic of tiup reload should check the version and configuration. This is very reasonable.

For example: if I patch a tidb-server node and a later reload finds that its version differs from the version tiup is managing, the reload will simply overwrite it, unless you used --overwrite when patching :thinking:.

Additionally, if you manually modify a node’s toml file, the reload will also revert the configuration of that node.

So, it is very reasonable to validate the version and configuration during reload.
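
As a rough illustration of the patch/reload interaction described above (the cluster name, package path, and node address are placeholders):

```
# Hot-patch tidb-server on one node; without --overwrite, a later
# reload/upgrade can roll the node back to the version tiup manages
tiup cluster patch tidb-test /path/to/tidb-hotfix.tar.gz -N 10.0.1.1:4000 --overwrite

# Reload re-renders and redistributes the config files and restarts the
# affected instances; this is where the version/config checks run
tiup cluster reload tidb-test -N 10.0.1.1:4000
```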

| username: Raymond | Original post link

Checking the version and configuration during tiup reload is very reasonable for installation, deployment, upgrading, or patching. But if it is just restarting one node and redistributing its configuration, is the check really necessary?

| username: WalterWj | Original post link

:thinking:, so are you using restart or reload?

| username: Raymond | Original post link

The principle of reload is similar to restart, except it also distributes a configuration file.
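
Roughly, for comparison (the cluster name is a placeholder):

```
# restart: only stops and starts the instances
tiup cluster restart tidb-test -R prometheus

# reload: regenerates and pushes the config files first, then restarts;
# the config-generation step is where the mirror/version check happens
tiup cluster reload tidb-test -R prometheus
```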

| username: WalterWj | Original post link

The logic of reload is much more complicated.

| username: Raymond | Original post link

My error shows “init config failed.” After checking the code, the error seems to occur at the step where tiup generates the configuration: when tiup reloads, it runs init config.