KV Node Data Distribution Imbalance During BR Restore

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: BR还原时kv节点数据分布不均

| username: xiexin

[TiDB Usage Environment] Production Environment
[TiDB Version] v4.0.10
[Reproduction Path] Restore command: br restore full --pd="xx.xxx.xxx.xx:2379" --storage="s3://${backup_file}" --s3.endpoint="http://xxxx" --s3.region="xxx" --send-credentials-to-tikv=true --log-file=/tmp/restore_20230525.log
[Encountered Problem: Phenomenon and Impact] When using BR to restore the cluster from a physical backup, data is distributed unevenly across the restored TiKV nodes; some nodes reach 100% disk usage, which causes the restore to fail.
Source cluster version: v4.0.9
Target cluster version: v4.0.10
br version: v4.0.10
First restore: the target cluster had the same number of servers as the source cluster. During the restore, some machines' disks filled up and the restore failed.
Second restore: added 2 TiKV nodes to the target cluster. During the restore, some machines' disks again filled up and the restore failed.
[Resource Configuration]
Source cluster v4.0.9 topology: 8 servers (3 TB disks), 3 PD, 3 TiDB, 8 TiKV, 3 Pump, 1 Prometheus
Target cluster v4.0.10 topology: 10 servers (3 TB disks), 3 PD, 3 TiDB, 10 TiKV, 6 Pump, 1 Prometheus
[Attachments: Screenshots/Logs/Monitoring]
Source cluster 8 servers' data disk usage: [screenshot]

Target cluster 10 servers' data disk usage (second restore): [screenshot]

| username: Billmay表妹 | Original post link

You can first check the region distribution through monitoring to see whether it is balanced (cluster-overview - TiKV - region).
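
If Grafana is not convenient at the moment, a rough alternative is to query PD directly with pd-ctl. This is only a sketch, assuming the (redacted) PD address from the restore command; pd-ctl can also be invoked as `tiup ctl pd`:

```shell
# List every TiKV store with its region_count, leader_count, and available space,
# to see at a glance whether regions are spread evenly across stores.
# The PD address below is a placeholder; replace it with your own.
pd-ctl -u http://xx.xxx.xxx.xx:2379 store \
  | grep -E '"address"|"region_count"|"leader_count"|"available"'
```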

| username: Billmay表妹 | Original post link

Refer to this:
https://docs.pingcap.com/zh/tidb/stable/grafana-overview-dashboard#tikv
https://docs.pingcap.com/zh/tidb/stable/dashboard-key-visualizer#region

This is the theory:
https://docs.pingcap.com/zh/tidb/stable/tidb-storage#region

| username: Billmay表妹 | Original post link

By the way, try to keep the versions consistent.

Source cluster version v4.0.9
Target cluster version v4.0.10
BR version v4.0.10

For example, can the source cluster version be upgraded to 4.0.10?
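
If the source cluster is managed by TiUP, a minor in-place upgrade is usually simpler than a migration. A minimal sketch, where `<cluster-name>` is a placeholder:

```shell
# Show the version of every component in the cluster
tiup cluster display <cluster-name>
# Rolling upgrade of the whole cluster to v4.0.10
tiup cluster upgrade <cluster-name> v4.0.10
# Confirm the BR binary matches the cluster version
br --version
```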

| username: xiexin | Original post link

The migration was done for a version upgrade, and the region sizes are similar, so the nodes with higher data disk usage necessarily hold more regions. The current question is why the region distribution is so skewed during the restore: other nodes still have free space, yet data is not being written to those idle nodes. The source cluster's disk usage of over 80% on some servers is due to the extra Pump and Prometheus components on those machines; its region distribution is still fairly even.
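
One way to narrow this down is to watch PD's view of the stores and its scheduling settings while the restore is running. A sketch with pd-ctl (the PD address is the same placeholder as in the restore command):

```shell
# Per-store region counts, region/leader scores, and remaining disk space
pd-ctl -u http://xx.xxx.xxx.xx:2379 store
# Scheduling limits and space thresholds (region-schedule-limit, low-space-ratio, ...)
pd-ctl -u http://xx.xxx.xxx.xx:2379 config show | grep -E 'schedule-limit|space-ratio'
# Make sure the balance-region scheduler has not been paused or removed
pd-ctl -u http://xx.xxx.xxx.xx:2379 scheduler show
```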

| username: zhanggame1 | Original post link

It feels like a bug; I haven't encountered it before.

| username: xiexin | Original post link

Current workaround:
Scale the new cluster out by 4 more TiKV nodes, for 14 servers in total, and run the BR restore again; this time the restore succeeded.
Then scale the extra nodes back in afterwards (a rough TiUP sketch of these steps follows the log summary below)…
The final success information in the restore log: [“Full restore Success summary: total restore files: 341222, total success: 341222, total failed: 0, total take(Full restore time): 17h38m51.219922709s, total take(real time): 8h56m53.812512961s, total kv: 122530638751, total size(MB): 15156886.41, avg speed(MB/s): 238.57”] [“split region”=4h0m43.659493435s] [“restore checksum”=8h56m1.114043982s] [“restore ranges”=228924] [Size=5563286697035]
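
For reference, the scale-out / restore / scale-in sequence with TiUP might look roughly like the following; the cluster name, topology file, and node address are placeholders:

```shell
# Add the 4 temporary TiKV nodes defined in scale-out-tikv.yaml
tiup cluster scale-out <cluster-name> scale-out-tikv.yaml

# Re-run the br restore full command shown above

# After the restore finishes and regions have rebalanced, retire the temporary nodes
tiup cluster scale-in <cluster-name> --node <tikv-ip>:20160
```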

| username: huhaifeng | Original post link

Sorry, a curious off-topic question:
I've seen many major version upgrades done as migration upgrades, but why use a migration upgrade for a minor version bump? Is there something particularly incompatible about 4.0.10?

| username: xiexin | Original post link

It was actually driven by a data center relocation requirement; the target TiDB cluster was given a minor version upgrade as part of the move.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.