The TiDB data used by the new cluster comes from the old cluster. After overwriting the new cluster's data directory with the old cluster's TiKV data directory, the TiKV nodes keep reporting errors after startup, as shown in the following image

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 新集群使用的tidb的数据是旧的集群上的数据,直接将旧集群上的tikv的数据目录覆盖新集群的数据目录,启动之后,tikv节点一直报错如下图

| username: Chengf01

【TiDB Usage Environment】Production Environment / Testing / PoC
【TiDB Version】
【Reproduction Path】What operations were performed that led to the issue
【Encountered Issue: Issue Phenomenon and Impact】
【Resource Configuration】
【Attachments: Screenshots / Logs / Monitoring】

| username: 小龙虾爱大龙虾 | Original post link

This is not a conventional operation, and it’s not recommended to do it this way. The cluster ID is recorded in TiKV and also in PD’s etcd; if they don’t match, the TiKV node cannot join. Make sure all the TiKV nodes’ data comes from the same cluster, then rebuild PD following the PD Recover User Guide | PingCAP Docs. That should bring the cluster up.
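For reference, a minimal sketch of the PD Recover step, assuming `pd-recover` is on the PATH and a freshly deployed, empty PD cluster is already running. The endpoint, cluster ID, and alloc ID below are placeholders: the real cluster ID comes from the old cluster’s PD/TiKV logs, and the alloc ID must be larger than any ID the old cluster ever allocated.

```python
# Sketch only: drive pd-recover from Python. All values below are placeholders.
import subprocess

PD_ENDPOINT = "http://127.0.0.1:2379"     # placeholder: endpoint of the freshly deployed PD
OLD_CLUSTER_ID = "6747551640615446306"    # placeholder: taken from the old cluster's logs
SAFE_ALLOC_ID = "10000000000"             # placeholder: must exceed any previously allocated ID

subprocess.run(
    [
        "pd-recover",
        "-endpoints", PD_ENDPOINT,
        "-cluster-id", OLD_CLUSTER_ID,
        "-alloc-id", SAFE_ALLOC_ID,
    ],
    check=True,  # raise if pd-recover exits with a non-zero status
)
# After pd-recover succeeds, restart the PD nodes first, then the TiKV nodes.
```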

| username: tidb菜鸟一只 | Original post link

Need to rebuild PD.

| username: h5n1 | Original post link

Search for reference articles.

| username: Miracle | Original post link

Would it be possible to bring over the data from the old PD as well?

| username: zxgaa | Original post link

Operating this way with inconsistent metadata is not recommended.

| username: Chengf01 | Original post link

Processed, thank you.

| username: 小龙虾爱大龙虾 | Original post link

Please share the final solution. I’m not sure which approach you used to resolve it, but it would be helpful for others to know.

| username: Chengf01 | Original post link

The PD cluster-id issue mentioned above has been resolved; now the error is as shown in the image above.

| username: h5n1 | Original post link

Is this the same issue as the one where the two raft engines were removed?

| username: Chengf01 | Original post link

I reinstalled a new cluster and copied the old TiKV data over. The raft-engine directory on node 1 had indeed been deleted. The md5 values of 0000000000000001.rewrite under raft-engine on nodes 2 and 3 were identical, so I copied node 2’s file to node 1. After that, all three nodes reported the errors above and could not start.
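For what it’s worth, a small sketch of the md5 comparison described here. The paths are placeholders, and identical digests only show the files are byte-for-byte equal; they do not make copying raft logs between different TiKV peers safe.

```python
# Sketch only: compute and compare md5 digests of the same raft-engine file
# across nodes. Paths are placeholders for illustration.
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large raft-engine files are not loaded into memory at once."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

candidates = [
    Path("/data/tikv-node2/raft-engine/0000000000000001.rewrite"),  # placeholder path (node 2)
    Path("/data/tikv-node3/raft-engine/0000000000000001.rewrite"),  # placeholder path (node 3)
]
digests = {p: md5sum(p) for p in candidates}
for path, digest in digests.items():
    print(path, digest)
print("identical:", len(set(digests.values())) == 1)
```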

| username: h5n1 | Original post link

Refer to this and prepare for the worst.

| username: Chengf01 | Original post link

I couldn’t find it, only this (image attached): after unzipping, it is tidb-exporter-v711-x86_64-unknown-linux-gnu.

| username: 随缘天空 | Original post link

This definitely won’t work. The metadata of the new cluster is inconsistent with the old cluster. It is recommended to use data migration tools to migrate the data from the old cluster to the new cluster.

| username: dba远航 | Original post link

This is caused by inconsistent ID information.

| username: Fly-bird | Original post link

If you can migrate all the PD data, it might work, but it’s a bit difficult. I suggest you create a new cluster and import the data.

| username: Kongdom | Original post link

It is recommended to replace the server through backup and restore, or by scaling out and then scaling in.
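For the backup-and-restore route, a minimal sketch assuming the `br` binary is available; the PD endpoints and storage path below are placeholders.

```python
# Sketch only: full backup from the old cluster and restore into the new one using BR.
# PD endpoints and the storage path are placeholders. With local:// storage, mount
# shared storage (e.g. NFS) at this path on every TiKV node, or copy the backup files
# to all nodes before restoring.
import subprocess

OLD_PD = "http://old-pd-host:2379"      # placeholder: a PD endpoint of the old cluster
NEW_PD = "http://new-pd-host:2379"      # placeholder: a PD endpoint of the new cluster
STORAGE = "local:///data/br-backup"     # placeholder: path visible to all TiKV nodes

# Take a full backup of the old cluster.
subprocess.run(
    ["br", "backup", "full", "--pd", OLD_PD, "--storage", STORAGE],
    check=True,
)

# Restore the backup into the new cluster; the databases/tables being restored
# must not already exist there.
subprocess.run(
    ["br", "restore", "full", "--pd", NEW_PD, "--storage", STORAGE],
    check=True,
)
```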

| username: 普罗米修斯 | Original post link

If the data volume is not large, full and incremental data synchronization is quite fast. There’s no need to go through so much trouble.

| username: Jellybean | Original post link

This operation is unlikely to work and will probably contaminate the original data.

The data stored in the old cluster, including its cluster ID, Region locations, and other metadata, is tied to the old cluster. The new cluster generates its own metadata, so using the old TiKV data directories as the storage nodes of the new cluster is unlikely to be feasible.

Not only will the metadata conflict, but the new cluster will also be unable to parse the TiKV Region data properly. Check whether any backups are available and use them to restore the cluster. If there are no backups, be prepared for the worst-case scenario after these operations.

| username: TIDB-Learner | Original post link

Take it as a warning!!