Is it feasible to set max-replicas to 1 before a BR restore to avoid the network overhead of data synchronization between replicas?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 为避免副本间同步数据带来的网络资源消耗,做br恢复前将max-replicas设为1是否可行?

| username: 滴滴嗒嘀嗒

During a BR restore, data should be written to the leader. The leader then has to synchronize that data to its followers, and leaders may also be transferred during the process. Will these two factors affect the efficiency of the BR restore?

| username: 这里介绍不了我 | Original post link

Is it advisable to set the replica count to 1 in a production environment?

| username: 滴滴嗒嘀嗒 | Original post link

Yes: set it to 1 before the restore and set it back to the original value after the restore completes. Is that feasible?
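
A minimal sketch of the proposed sequence, assuming pd-ctl is invoked through TiUP; the TiDB version, PD address, and backup storage path below are placeholders, so verify everything on your own cluster first:

```shell
# 1. Lower the replica count to 1 before the restore (an online change via pd-ctl).
tiup ctl:<version> pd -u http://<pd-host>:2379 config set max-replicas 1

# 2. Run the full restore; the storage URL is a placeholder for your backup location.
br restore full --pd "<pd-host>:2379" --storage "s3://<bucket>/<backup-prefix>"

# 3. Raise the replica count back to 3 once the restore has succeeded.
tiup ctl:<version> pd -u http://<pd-host>:2379 config set max-replicas 3
```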

| username: 这里介绍不了我 | Original post link

I don’t recommend doing this. What if it fails partway? How would you recover? Besides, newer versions of BR already offer significant speed improvements.

| username: jiayou64 | Original post link

In a production environment, there should be at least three replicas. For development and testing, do whatever you want.

| username: Kongdom | Original post link

:thinking: BR does physical backup and restore. If there were 3 replicas before the backup, there should also be 3 replicas after the restore.

| username: 濱崎悟空 | Original post link

There is a risk.

| username: 小龙虾爱大龙虾 | Original post link

Keep a spare copy of the data first :joy_cat:

| username: zhaokede | Original post link

Data security comes first.

| username: dba-kit | Original post link

The approach is feasible, but in practice, raising the replicas from 1 back to 3 also takes time before the cluster reaches a production-ready state, and that time will likely be longer than the BR restore itself (PD’s default scheduling is very slow, and if you force it to go faster, the cluster will be under heavy load).
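
For reference, a hedged sketch of how the replica-scheduling rate could be inspected and raised through pd-ctl; the limit values below are purely illustrative, not recommendations, and higher values increase cluster load as noted above:

```shell
# Show the current scheduling configuration, including replica-schedule-limit.
tiup ctl:<version> pd -u http://<pd-host>:2379 config show

# Allow more concurrent replica-scheduling tasks (illustrative value).
tiup ctl:<version> pd -u http://<pd-host>:2379 config set replica-schedule-limit 64

# Raise the per-store rate for adding peers (illustrative value; syntax may vary by version).
tiup ctl:<version> pd -u http://<pd-host>:2379 store limit all 15 add-peer
```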

| username: YuchongXU | Original post link

Sure.

| username: Kongdom | Original post link

It doesn’t seem cost-effective, and it will probably take longer overall. After all, going from 3 replicas to 1, backing up and restoring, and then going from 1 back to 3 adds two extra steps and increases the risk of data loss.

| username: 小龙虾爱大龙虾 | Original post link

But the leader and its followers hold the same data. :joy_cat:

| username: Kongdom | Original post link

:yum: Three replicas are there to ensure high availability. Although the data is the same, the replicas are placed on different nodes; if they were all placed together, high availability could not be guaranteed.

| username: TIDB-Learner | Original post link

If nothing goes wrong, everyone is happy. Non-standard operations like this are often done purely for study and research; feel free to post your results.

| username: 滴滴嗒嘀嗒 | Original post link

To be clear, raising the replicas from 1 back to 3 here happens only after the BR restore has completed. The main goal is to prevent replica synchronization and leader transfers during the restore from hurting BR’s efficiency or even causing it to fail. The time it takes for the cluster to return to a production-ready state after the restore hasn’t been considered for now.
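
Once max-replicas is raised back to 3, one hedged way to check whether re-replication has caught up is to look for regions that still have fewer peers than max-replicas (version and PD address are placeholders):

```shell
# An empty result means no region is missing peers, i.e. re-replication is done.
tiup ctl:<version> pd -u http://<pd-host>:2379 region check miss-peer
```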

| username: 滴滴嗒嘀嗒 | Original post link

Does the max-replicas configuration need a cluster restart to take effect after modification?

| username: Kongdom | Original post link

Generally, after modifying parameters in the configuration file, a reload is required for the changes to take effect.
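
That said, max-replicas is a PD configuration item, and as far as I know it can be changed online with pd-ctl without a restart; a minimal sketch (version and PD address are placeholders):

```shell
# Change max-replicas online and verify the new value.
tiup ctl:<version> pd -u http://<pd-host>:2379 config set max-replicas 3
tiup ctl:<version> pd -u http://<pd-host>:2379 config show replication
```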