How to Balance After One Node Fails in a Three-Node Cluster

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 三节点挂掉一节点后如何均衡 (How to rebalance after one node in a three-node cluster fails)

| username: Kongdom

[TiDB Usage Environment] Production Environment
[TiDB Version] v5.4.3
[Encountered Issue]
TiKV nodes 201, 202, and 203 have somewhat unbalanced leader replica counts, but the difference is not large. Assume each node holds about 10K leader replicas.
When node 202 goes down, how are the leaders rebalanced? Are the follower replicas on 201 and 203 elected as new leaders, so that each surviving node ends up with roughly 15K leaders?
In my current test:
Reading tables whose leader replicas are on 201 works fine.
Reading tables whose leader replicas were on 202 returns the error: Region is unavailable.
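One way to verify how the leaders are actually distributed before and after the failure is to count leaders per store from the SQL layer. A minimal sketch, assuming the built-in INFORMATION_SCHEMA.TIKV_REGION_PEERS table (available in v5.4) and that the store IDs map to nodes 201/202/203:

```sql
-- Count leader replicas per TiKV store.
-- Once re-election has completed after 202 goes down, its store ID
-- should report no leaders, and 201/203 should absorb them.
SELECT STORE_ID, COUNT(*) AS leader_count
FROM INFORMATION_SCHEMA.TIKV_REGION_PEERS
WHERE IS_LEADER = 1
GROUP BY STORE_ID
ORDER BY STORE_ID;
```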

| username: 近墨者zyl | Original post link

After 202 goes down, a leader re-election takes place for the affected Regions, and PD handles Region scheduling automatically. When the client cannot reach the Region replica on 202, it reports a backoff error, then fetches the latest Region information from PD again and caches it on the TiDB server.
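As a rough way to watch this process from the SQL layer, a sketch assuming PD's per-store view is exposed through the standard INFORMATION_SCHEMA.TIKV_STORE_STATUS table:

```sql
-- PD's view of each store: while re-election and rescheduling happen,
-- the failed store's LEADER_COUNT should drop to 0 and the surviving
-- stores' LEADER_COUNT should grow accordingly.
SELECT STORE_ID, ADDRESS, STORE_STATE_NAME, LEADER_COUNT, REGION_COUNT
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;
```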
