In the 2PC (two-phase commit) phase with 3 TiKV nodes, is there one primary key on the leader node, or does each node have its own primary key?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 2PL阶段,3个节点的TIKV,是在leader节点存在一把primary key,还是每个节点上各存在一个primary key?

| username: Lystorm

During the prewrite phase, the first row of the write operation is taken as the primary row, and the remaining rows are secondary rows. However, TiKV is a cluster: does this primary row exist on every node, or is there only one primary row in the entire cluster?
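
To make the question concrete, here is a minimal Go sketch of a Percolator-style prewrite, which is the model TiDB's 2PC follows; the names (`Mutation`, `prewrite`) are illustrative, not the actual client-go API:

```go
package main

import "fmt"

// Mutation is one key/value write belonging to the transaction.
type Mutation struct {
	Key   []byte
	Value []byte
}

// prewrite sketches the Percolator-style prewrite step: the first
// mutation's key becomes the transaction's single primary key, and
// every other key is a secondary whose lock points back to it.
func prewrite(startTS uint64, muts []Mutation) {
	primary := muts[0].Key // one logical primary per transaction
	for _, m := range muts {
		// In a real client this is an RPC to the leader of the Region
		// that owns m.Key; the lock record it writes names the primary.
		fmt.Printf("lock key=%q primary=%q start_ts=%d\n",
			m.Key, primary, startTS)
	}
}

func main() {
	prewrite(42, []Mutation{
		{Key: []byte("k1"), Value: []byte("v1")},
		{Key: []byte("k2"), Value: []byte("v2")},
	})
}
```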

| username: 箱子NvN | Original post link

The first row of data is written as the primary row to the leader replica of its Region and then replicated to the follower replicas through Raft. With three TiKV nodes and no special settings, the default is three replicas, so each TiKV node holds a copy; logically, there is still only one primary row.
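
A toy Go model of why that replication puts the lock data physically on all three nodes while it remains one logical lock: the lock record is ordinary Region data, so the leader proposes it through the Raft log and every replica applies the same entry. This is a sketch of the idea, not TiKV's code:

```go
package main

import "fmt"

// raftEntry is a toy Raft log entry carrying a lock write.
type raftEntry struct {
	key, primary string
}

// replica applies committed entries to its local lock store. All three
// replicas end up with identical lock data; only the leader serves it.
type replica struct {
	name  string
	locks map[string]string // key -> primary key its lock points to
}

func (r *replica) apply(e raftEntry) {
	r.locks[e.key] = e.primary
	fmt.Printf("%s applied lock on %q (primary %q)\n", r.name, e.key, e.primary)
}

func main() {
	group := []*replica{
		{"tikv-1 (leader)", map[string]string{}},
		{"tikv-2 (follower)", map[string]string{}},
		{"tikv-3 (follower)", map[string]string{}},
	}
	// The leader proposes the lock write; once the entry is committed,
	// every replica applies it to its own copy of the Region.
	entry := raftEntry{key: "k1", primary: "k1"}
	for _, r := range group {
		r.apply(entry)
	}
}
```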

| username: Lystorm | Original post link

After organizing my thoughts, I believe a Raft group should hold only one logical primary lock. If the leader and the followers each held an independent primary lock, then the failure of any single node would cause trouble:
- If a follower fails, then after it recovers, its own primary lock would still be unreleased while the other nodes' primary locks had already been released, so it would keep waiting for its own lock to be released until it timed out.
- If the leader fails, the whole cluster would enter a partial-failure state, possibly triggering a re-election, or the leader might recover in time and continue with the write tasks and the other branch logic.

I'm not sure whether this understanding is correct; see the sketch below.
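
For reference, here is a minimal sketch (illustrative, not TiKV's implementation) of the resolve-lock rule that makes a single logical primary lock safe across failures: whoever encounters a leftover lock consults the status of the one primary key, so no node ever waits on a private, per-node lock:

```go
package main

import "fmt"

// txnStatus is the verdict derived from the transaction's primary key.
type txnStatus int

const (
	committed  txnStatus = iota // primary has a commit record
	rolledBack                  // primary lock expired without a commit
	unknown                     // primary lock still live; txn in flight
)

func (s txnStatus) String() string {
	return [...]string{"committed", "rolledBack", "unknown"}[s]
}

// checkPrimary sketches what a reader does when it finds a leftover
// lock on a secondary key: ask the single logical primary key for the
// verdict, then resolve the secondary the same way.
func checkPrimary(primaryCommitted, lockExpired bool) txnStatus {
	switch {
	case primaryCommitted:
		return committed // commit the secondary with the same commit ts
	case lockExpired:
		return rolledBack // roll back the primary first, then secondaries
	default:
		return unknown // wait and retry; the owner may still commit
	}
}

func main() {
	fmt.Println(checkPrimary(true, false))  // committed
	fmt.Println(checkPrimary(false, true))  // rolledBack
	fmt.Println(checkPrimary(false, false)) // unknown
}
```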

| username: Lystorm | Original post link

The default value of tidb_gc_life_time is 10m, which means data deleted within the last 10 minutes can still be recovered. If you want to recover data deleted longer ago than that, you need to increase this parameter before deleting the data.
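
As a concrete example, assuming a recent TiDB version where tidb_gc_life_time is exposed as a system variable (older versions set it through the mysql.tidb table) and a hypothetical endpoint at 127.0.0.1:4000, the GC window can be widened before a risky delete like this:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // TiDB speaks the MySQL wire protocol
)

func main() {
	// Hypothetical DSN; point it at your own TiDB endpoint.
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Widen the GC window from the default 10m to 24h *before* deleting,
	// so old MVCC versions stay recoverable for a day.
	if _, err := db.Exec(`SET GLOBAL tidb_gc_life_time = '24h'`); err != nil {
		log.Fatal(err)
	}
}
```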

| username: system | Original post link

This topic will be automatically closed 60 days after the last reply. No new replies are allowed.