Replica Issues

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 副本问题 (Replica Issues)

| username: TiDBer_IqLxmVWq

Three host nodes, with the following parameters:

    replication.enable-placement-rules  TRUE
    replication.max-replicas            3

Sample rows from the table TIKV_REGION_PEERS (the columns appear to be REGION_ID, PEER_ID, STORE_ID, IS_LEARNER, IS_LEADER):

    380657  380659  4  0  0
    380657  380658  1  0  1
    380657  380660  5  0  0
As I understand it, region 380657 has three replicas (380659, 380658, 380660), so there would be four copies of the data in total.
Is that correct? How should I understand this?
| username: WalterWj | Original post link

The region ID is the name of the group, and the peer IDs are the names of its members. "Three replicas" refers to the members, so there are three copies of the data, not four.

| username: tidb菜鸟一只 | Original post link

What the person above said is correct. In TiDB, each table is split into multiple Regions according to the Region partitioning strategy. Each Region is then replicated across multiple Peers, which together form a Raft group. Each Peer has a unique Peer ID and holds a full copy of that Region's data. Within a Raft group, one Peer is elected Leader and handles the read and write requests for the Region, while the other Peers act as Followers and participate in data replication. When the Leader fails, a Follower can be elected as the new Leader, ensuring the availability and data consistency of the entire Region.
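To make the group/member relationship concrete, here is a minimal Python sketch (not TiDB code; the column meanings are assumed from this thread, not taken from the official schema) that groups the sample rows by region ID and counts the replicas:

```python
# Minimal sketch: group TIKV_REGION_PEERS-style rows by region ID to count
# replicas. Assumed column order: (region_id, peer_id, store_id, is_learner,
# is_leader) -- an assumption based on the discussion above.
from collections import defaultdict

rows = [
    (380657, 380659, 4, 0, 0),
    (380657, 380658, 1, 0, 1),
    (380657, 380660, 5, 0, 0),
]

peers_by_region = defaultdict(list)
for region_id, peer_id, store_id, is_learner, is_leader in rows:
    peers_by_region[region_id].append(peer_id)

for region_id, peers in peers_by_region.items():
    # Each peer is one full copy of the region's data, so the replica count
    # equals the number of peers: 3 here, not 4. The region ID itself is
    # only the group's name, not an extra copy.
    print(f"region {region_id}: {len(peers)} replicas, peers {peers}")
```

Running this prints one line for region 380657 showing its three peers, matching `replication.max-replicas = 3`.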

| username: TiDBer_IqLxmVWq | Original post link

Thank you!

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.