Considerations for Upgrading TiDB Data Migration from 6.4.0 to 7.1.0

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Tidb Data Migration6.4.0升级到7.1.0有哪些注意事项

| username: ks_ops_ms

【TiDB Usage Environment】Testing

| username: ks_ops_ms | Original post link

Can the DM upgrade be done by directly switching the image?

| username: 像风一样的男子 | Original post link

You can use the `tiup dm upgrade` command to upgrade the cluster: `tiup dm upgrade prod-cluster ${version}`.
You can refer to the official documentation for detailed instructions.
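
For illustration, the tiup-based upgrade might look like the sketch below; the cluster name `prod-cluster` and the target version are placeholders, while `tiup dm display` and `tiup dm upgrade` are the standard subcommands:

```shell
# Check the current topology and component versions of the DM cluster.
tiup dm display prod-cluster

# Upgrade all dm-master and dm-worker components to the target version.
tiup dm upgrade prod-cluster v7.1.0

# Confirm every node reports the new version and is in the Up state.
tiup dm display prod-cluster
```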

| username: ks_ops_ms | Original post link

Can I use tidb-operator to upgrade?

| username: Billmay表妹 | Original post link

When upgrading, the entire cluster must be upgraded together. Upgrading a single component separately may cause issues.

| username: 像风一样的男子 | Original post link

The DM cluster and the TiDB cluster are two separate clusters and can be upgraded independently.

| username: 像风一样的男子 | Original post link

The documentation is incomplete. I can’t find the method to upgrade the DM cluster deployed using Kubernetes.

| username: ks_ops_ms | Original post link

I also haven’t found a way to upgrade DM in k8s, but DM seems to be just a tool. Would it work to pause the tasks first, copy the task files, change the image to the new version, and then resume the tasks once the new pods come up? Not sure if this is feasible.
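
If you go that route, the pause/resume around the image switch could be done with dmctl. A minimal sketch, assuming a dm-master reachable at 172.16.0.1:8261 and a task named my_task (both placeholders; on k8s you would run dmctl from wherever the binary is available rather than through tiup):

```shell
# Pause the replication task before the dm-worker image is switched.
tiup dmctl --master-addr 172.16.0.1:8261 pause-task my_task

# ...switch the image and wait for the workers to come back, then resume.
tiup dmctl --master-addr 172.16.0.1:8261 resume-task my_task

# Verify the task is back in the Running stage.
tiup dmctl --master-addr 172.16.0.1:8261 query-status my_task
```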

| username: 像风一样的男子 | Original post link

It’s equivalent to reinstalling the DM cluster. The data needs to be resynchronized.

| username: ks_ops_ms | Original post link

The day before yesterday, when upgrading the TiDB cluster on k8s, I found that the documentation mentioned directly changing the version number in the CRD when upgrading components. So I feel that this might also be a way to upgrade.
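
With TiDB Operator, that would mean bumping spec.version on the DMCluster object and letting the Operator roll the pods. A rough sketch, assuming a namespace dm and a DMCluster named basic (both placeholders; the exact field layout should be checked against your Operator version):

```shell
# Bump the DM version; the Operator then performs a rolling update of dm-master and dm-worker.
kubectl -n dm patch dmcluster basic --type merge -p '{"spec":{"version":"v7.1.0"}}'

# Watch the pods restart with the new image.
kubectl -n dm get pods -w
```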

| username: 像风一样的男子 | Original post link

I haven’t deployed TiDB using k8s, so I’m not familiar with this area. Sorry.

| username: ks_ops_ms | Original post link

Hahaha, let’s all study together and see if any experts can answer our questions.

| username: 有猫万事足 | Original post link

If the upstream does not use GTID and DM uses relay log, the suspicion is that during the upgrade the relay-log directories used by DM may end up with the same name (the same server UUID). This could cause interference between synchronization tasks, leaving some of them unable to continue and requiring the tasks to be rebuilt.

After I upgraded to 7.3.0, I ran the following command:

tiup dm exec {cluster_name} --command='cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index' -R dm-worker

The results are as follows:

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
97506354-f2c4-11ed-a355-525400eac8ec.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

Outputs of cat /tidb_dm/deploy/dm-worker-8262/relay-dir/server-uuid.index on xxxx:
stdout:
96fc48ab-ea51-11ed-8476-5254000699c3.000001

You can see that many workers share 96fc48ab-ea51-11ed-8476-5254000699c3. If relay log is not used, or the upstream has GTID enabled, the impact should be minimal.
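
Before upgrading, both conditions can be checked. A small sketch, assuming placeholder addresses and an upstream source named mysql-01:

```shell
# Check whether the upstream MySQL has GTID enabled.
mysql -h 10.0.0.10 -P 3306 -u root -p -e "SHOW VARIABLES LIKE 'gtid_mode';"

# Dump the source configuration; enable-relay and enable-gtid show whether
# relay log and GTID are in use for that upstream.
tiup dmctl --master-addr 172.16.0.1:8261 get-config source mysql-01
```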

| username: Fly-bird | Original post link

The upgrade shouldn’t have much impact, right?

| username: ks_ops_ms | Original post link

There shouldn’t be any impact if you directly change the image version in the YAML.

| username: ks_ops_ms | Original post link

I just directly modified the image version in the CRD YAML of DM after making a backup. After restarting, all the original configurations were still there, and there were no errors. It should be okay to switch versions directly after making a backup.
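
As a safeguard around that kind of in-place edit, backing up the CR first and sanity-checking afterwards might look like the following; the namespace dm, the DMCluster name basic, and the instance label are assumptions for illustration:

```shell
# Save the current DMCluster object so it can be restored if the upgrade misbehaves.
kubectl -n dm get dmcluster basic -o yaml > dmcluster-basic-before-upgrade.yaml

# After editing the version/image and letting the pods roll, confirm everything came back.
kubectl -n dm get dmcluster basic
kubectl -n dm get pods -l app.kubernetes.io/instance=basic
```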

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.