Issues Regarding Placement Rules in SQL Replica Placement Strategy

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 关于Placement Rules in SQL副本放置策略问题

| username: terry0219

[Test/Poc Environment] TiDB
[TiDB Version] 7.5.1
The current label hierarchy is ["cloud", "region", "host"], with TiKV labels such as { cloud: "aliyun", region: "cn-beijing-a", host: "tikv-1" }, { cloud: "aliyun", region: "cn-beijing-b", host: "tikv-2" }, { cloud: "aliyun", region: "cn-beijing4-1", host: "tikv-3" }, and so on, with a 5-replica configuration.
If I want to place the Leader node in either cn-beijing-a or cn-beijing-b, with 1 or 2 replicas in cn-beijing-a, 1 or 2 replicas in cn-beijing-b, and 1 replica in cn-beijing4-1, is it possible to achieve this?

Following the documentation, I created this policy: CREATE PLACEMENT POLICY testp2 LEADER_CONSTRAINTS="[+region=cn-beijing-a]" FOLLOWER_CONSTRAINTS='{"+region=cn-beijing-a": 1, "+region=cn-beijing-b": 2, "+region=cn-beijing4-1": 1}'; but with this policy the leader can only be placed in cn-beijing-a. I want the leader to be able to sit in either of the two regions.

| username: tidb菜鸟一只 | Original post link

PRIMARY_REGION specifies the region where the Leader is placed, and only a single region can be specified.

| username: terry0219 | Original post link

I thought of a compromise, though I'm not sure it's suitable for production: CREATE PLACEMENT POLICY testp1 CONSTRAINTS='{"+region=cn-beijing-a": 2, "+region=cn-beijing-b": 2, "+region=cn-beijing4-1": 1}'; and then use `scheduler add evict-leader-scheduler <store_id>` to evict leaders, preventing the leader from being scheduled to cn-beijing4-1.
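The eviction step above can be sketched with pd-ctl; the PD address and store ID below are placeholders, so look up the real store ID of the cn-beijing4-1 node first:

```
# List stores and their labels to find the store ID for cn-beijing4-1
pd-ctl -u http://127.0.0.1:2379 store

# Add an evict-leader scheduler for that store (store ID 3 is a placeholder)
pd-ctl -u http://127.0.0.1:2379 scheduler add evict-leader-scheduler 3

# Confirm the scheduler was created
pd-ctl -u http://127.0.0.1:2379 scheduler show
```

Note that evict-leader-scheduler is per store, so one scheduler is needed for each TiKV instance in cn-beijing4-1.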

| username: TiDBer_yyy | Original post link

I'm not sure whether this solves the problem:

By configuring a larger election timeout tick for TiKV nodes through raftstore.raft-min-election-timeout-ticks and raftstore.raft-max-election-timeout-ticks, the probability of the Region on that node becoming the Leader can be significantly reduced. However, in disaster scenarios, if some TiKV nodes go down and the Raft logs of other surviving TiKV nodes are lagging, only the Region on the TiKV node with the larger election timeout tick configuration can become the Leader. Since the Region on this TiKV node needs to wait at least the time set by raftstore.raft-min-election-timeout-ticks before initiating an election, it is advisable to avoid setting this configuration value too high to prevent affecting the cluster’s availability in such scenarios.
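The tick settings described above go in the TiKV configuration file of the cn-beijing4-1 node only; a sketch, with illustrative values (the appropriate magnitude depends on your base tick interval and availability requirements):

```toml
# tikv.toml on the cn-beijing4-1 TiKV node only (values are illustrative)
[raftstore]
# Raise the election timeout range so this node rarely wins leader elections.
# Avoid setting these too high, or failover will be slowed when only this
# node's Raft logs are up to date.
raft-min-election-timeout-ticks = 50
raft-max-election-timeout-ticks = 60
```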

| username: TiDBer_yyy | Original post link

Just configure larger election timeout ticks in the store configuration file of the cn-beijing4-1 node.

| username: terry0219 | Original post link

That's also an option; I'll give it a try.

| username: terry0219 | Original post link

The effect is not good: after configuring these two parameters, the node's leader count still kept growing when I simulated data inserts.

| username: terry0219 | Original post link

It seems this configuration only takes effect during elections triggered by failures, not during normal leader balancing.

| username: TiDBer_yyy | Original post link

This is more flexible; the principle is the same.

| username: 林夕一指 | Original post link

Why not just add an extra label, isleader? :thinking:
LEADER_CONSTRAINTS="[+isleader=1]"
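A sketch of this label-based approach, assuming the TiKV stores in cn-beijing-a and cn-beijing-b carry an extra isleader: 1 label (the label name, policy name, and counts are illustrative, not tested):

```sql
-- Hypothetical: only stores in cn-beijing-a and cn-beijing-b are labeled isleader=1
CREATE PLACEMENT POLICY testp3
  LEADER_CONSTRAINTS="[+isleader=1]"
  FOLLOWER_CONSTRAINTS='{"+region=cn-beijing-a": 1, "+region=cn-beijing-b": 2, "+region=cn-beijing4-1": 1}';
```

Note that with a dictionary-format FOLLOWER_CONSTRAINTS the final per-region count depends on where the leader lands: 2:2:1 if it lands in cn-beijing-a, but 1:3:1 if it lands in cn-beijing-b, so an even split is hard to guarantee this way.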

| username: terry0219 | Original post link

How should FOLLOWER_CONSTRAINTS be written to guarantee an even 2:2:1 distribution?

| username: terry0219 | Original post link

Later, I tried the following approach:

  1. CREATE PLACEMENT POLICY testp1 CONSTRAINTS='{"+region=cn-beijing-a": 2, "+region=cn-beijing-b": 2, "+region=cn-beijing4-1": 1}'; to place replicas 2:2:1 first.
  2. Adjust the leader_weight of the store in region=cn-beijing4-1 to 0.
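Step 2 can be done with pd-ctl's store weight command, which takes a leader weight followed by a region weight; the store ID below is a placeholder for the cn-beijing4-1 store:

```
# Set leader_weight=0 (no leaders scheduled here) while keeping region_weight=1
pd-ctl -u http://127.0.0.1:2379 store weight 3 0 1

# Verify the new weights in the store's details
pd-ctl -u http://127.0.0.1:2379 store 3
```

Unlike the election-timeout-tick approach, this steers the balance-leader scheduler itself, so leaders are moved away during normal operation rather than only during failover elections.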

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.