Upgrade TiDB to 7.1.0 in Production Environment

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 生产环境tidb升级至7.1.0

| username: ks_ops_ms

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] 6.4.0
[Reproduction Path] Operations performed that led to the issue
[Encountered Issue: Problem Phenomenon and Impact] TiDB is deployed on k8s in the production environment. The testing environment was successfully upgraded to 7.1.0 through a rolling upgrade, but that upgrade had no rollback plan, so we are worried about unexpected incidents during the production upgrade. To be cautious, we would prefer a migration-based upgrade: either migrate the data from the old cluster into a new 7.1.0 cluster, or migrate it into a new cluster of the same version and then upgrade that new cluster to 7.1.0. However, while setting up the new cluster we found that the tidbcluster CRD cannot be modified arbitrarily, because doing so directly affects the normal operation of the existing cluster. So I would like to ask the experts whether there is a way to set up an additional temporary cluster, with different specifications but the same version as the old one, for testing without affecting the existing cluster (as sketched below).
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
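
A minimal sketch of one possible approach, assuming TiDB Operator's `pingcap.com/v1alpha1` API: the production cluster is an instance (custom resource) of the shared `tidbclusters` CRD, so a second, temporary TidbCluster object can usually be created under a different name and namespace without touching the production object or the CRD definition itself. All names, replica counts and storage sizes below (`tidb-temp`, etc.) are illustrative assumptions, not the production spec.

```python
# Sketch: create a second, temporary TidbCluster custom resource in its own
# namespace, pinned to the same version (6.4.0) as the old cluster.
# The namespace "tidb-temp" is assumed to exist already.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

temp_cluster = {
    "apiVersion": "pingcap.com/v1alpha1",
    "kind": "TidbCluster",
    "metadata": {"name": "tidb-temp", "namespace": "tidb-temp"},
    "spec": {
        "version": "v6.4.0",            # same version as the old cluster
        "timezone": "UTC",
        "pd":   {"baseImage": "pingcap/pd",   "replicas": 1,
                 "requests": {"storage": "10Gi"}},
        "tikv": {"baseImage": "pingcap/tikv", "replicas": 3,
                 "requests": {"storage": "100Gi"}},
        "tidb": {"baseImage": "pingcap/tidb", "replicas": 1,
                 "service": {"type": "ClusterIP"}},
    },
}

# This only adds a new custom resource; the production TidbCluster object
# and the cluster-wide CRD definition are left untouched.
api.create_namespaced_custom_object(
    group="pingcap.com",
    version="v1alpha1",
    namespace="tidb-temp",
    plural="tidbclusters",
    body=temp_cluster,
)
```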

| username: Miracle | Original post link

As far as I remember, it doesn't seem possible to create two CRDs in one cluster…

| username: ks_ops_ms | Original post link

Yes, if you change the original CRD configuration, it directly affects the existing TiDB cluster that the production business relies on.

| username: Miracle | Original post link

You can try stopping all the services first, making a copy of the mounted data directories, and then proceeding with the rolling upgrade. If the rolling upgrade fails, stop the services and remount the original data directories, so the original data is still intact.
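
Along the same lines, a hedged sketch: instead of copying directories by hand, you could snapshot the PD/TiKV PVCs before the rolling upgrade, assuming the storage class has a CSI driver with snapshot support. The VolumeSnapshotClass name `csi-snapclass`, the namespace, the cluster name, and the reliance on TiDB Operator's standard component labels are all assumptions here.

```python
# Sketch: snapshot the production cluster's PD/TiKV PVCs before upgrading,
# so the pre-upgrade data can be restored if the rolling upgrade goes wrong.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
crd_api = client.CustomObjectsApi()

NAMESPACE = "tidb-cluster"   # namespace of the production cluster (assumption)
CLUSTER = "basic"            # TidbCluster name (assumption)

pvcs = core.list_namespaced_persistent_volume_claim(
    NAMESPACE,
    label_selector=(f"app.kubernetes.io/instance={CLUSTER},"
                    "app.kubernetes.io/component in (pd,tikv)"),
)

for pvc in pvcs.items:
    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"pre-upgrade-{pvc.metadata.name}",
                     "namespace": NAMESPACE},
        "spec": {
            "volumeSnapshotClassName": "csi-snapclass",
            "source": {"persistentVolumeClaimName": pvc.metadata.name},
        },
    }
    crd_api.create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1",
        namespace=NAMESPACE,
        plural="volumesnapshots",
        body=snapshot,
    )
    print(f"snapshot requested for {pvc.metadata.name}")
```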

| username: Fly-bird | Original post link

Since you are using k8s, just start another cluster and copy the data over.
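
For the "copy the data over" step, one hedged option is the TiDB Operator's Backup/Restore custom resources, which drive BR against S3-compatible storage. The bucket, secret, cluster and namespace names below are placeholders, and the specs are abbreviated to the minimum fields; treat this as a sketch rather than a complete manifest.

```python
# Sketch: back up the old cluster with a Backup CR, then restore the backup
# into the temporary cluster with a Restore CR (BR via S3-compatible storage).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

backup = {
    "apiVersion": "pingcap.com/v1alpha1",
    "kind": "Backup",
    "metadata": {"name": "backup-before-upgrade", "namespace": "tidb-cluster"},
    "spec": {
        "br": {"cluster": "basic", "clusterNamespace": "tidb-cluster"},
        "s3": {"provider": "aws", "secretName": "s3-secret",
               "bucket": "my-backup-bucket", "prefix": "pre-upgrade"},
    },
}
api.create_namespaced_custom_object(
    "pingcap.com", "v1alpha1", "tidb-cluster", "backups", backup)

# Once the backup completes, restore it into the temporary cluster.
restore = {
    "apiVersion": "pingcap.com/v1alpha1",
    "kind": "Restore",
    "metadata": {"name": "restore-to-temp", "namespace": "tidb-temp"},
    "spec": {
        "br": {"cluster": "tidb-temp", "clusterNamespace": "tidb-temp"},
        "s3": {"provider": "aws", "secretName": "s3-secret",
               "bucket": "my-backup-bucket", "prefix": "pre-upgrade"},
    },
}
api.create_namespaced_custom_object(
    "pingcap.com", "v1alpha1", "tidb-temp", "restores", restore)
```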