A Question About Placement Rules

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 一个关于 placement rule 的问题

| username: Smityz

In the example shown with 1 voter, 1 learner, and 2 followers, if the leader in the Raft group goes down, and the two followers can only vote but cannot themselves run for leader, does that mean a new leader has to be designated manually?
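
For reference, a rule set matching that example might look roughly like this (a hypothetical sketch with made-up rule ids, not the exact rules from the example; per PD's placement-rule semantics, a "voter" can be elected leader, a "follower" is a Raft voter that PD keeps leadership away from, and a "learner" does not vote):

[
  { "group_id": "pd", "id": "v1", "role": "voter", "count": 1 },
  { "group_id": "pd", "id": "l1", "role": "learner", "count": 1 },
  { "group_id": "pd", "id": "f1", "role": "follower", "count": 2 }
]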

| username: Billmay表妹 | Original post link

When the Leader in a Raft group goes down, the remaining Follower nodes start a new Leader election. In the Raft protocol, each node has a randomized election timeout; when the timeout fires, the node starts an election and sends RequestVote RPCs to the other nodes, asking them to vote for it. If a node receives votes from a majority of the group, it becomes the new Leader.

Therefore, even if the Leader in the Raft group goes down, as long as Follower nodes are still alive they can elect a new Leader on their own, without anyone manually specifying one. Of course, if you do want to designate a Leader manually, that is also possible: in TiDB you can use the pd-ctl tool to transfer the leader to a specific store.
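
For example, with placeholder region and store IDs:

$ pd-ctl -u http://127.0.0.1:2379 region 1                          # inspect region 1 and see which peer currently holds the leader
$ pd-ctl -u http://127.0.0.1:2379 operator add transfer-leader 1 2  # move region 1's leader to store 2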

| username: Smityz | Original post link

What you described is the normal situation. What I want to ask is what the election looks like after a placement rule has been added: a replica given the follower role in the placement rule should not be elected leader, right?

| username: tidb菜鸟一只 | Original post link

After a placement rule is added, if the store holding the replicas designated as leader becomes abnormal, you need to manually modify the placement rule of the affected table and promote the original follower replicas to the leader role.
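
A sketch of that flow with pd-ctl (the file name is arbitrary):

$ pd-ctl config placement-rules load --out=rules.json   # dump the current rules to a local file
$ vi rules.json                                         # edit the rules, e.g. change the surviving rule's "role" to "voter" or "leader"
$ pd-ctl config placement-rules save --in=rules.json    # write the edited rules back to PD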

| username: h5n1 | Original post link

A placement rule only schedules the corresponding replicas onto the specified TiKV stores via labels; it mainly determines where replicas are placed. As long as the required replica count is satisfied, it does not affect the Raft election.
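
For context, those labels are declared on each TiKV store, e.g. in tikv.toml (the label values here are illustrative):

[server.labels]
kind = "ssd"
host = "h1"

You can check which labels PD actually sees for each store with pd-ctl store.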

| username: Smityz | Original post link

Does the placement rule also apply to the RawKV scenario?

$ bash p.sh config placement-rules show
[
  {
    "group_id": "pd",
    "id": "1",
    "start_key": "",
    "end_key": "",
    "role": "voter",
    "count": 1,
    "label_constraints": [
      {
        "key": "kind",
        "op": "in",
        "values": [
          "ssd"
        ]
      }
    ],
    "location_labels": [
      "host"
    ],
    "version": 3,
    "create_timestamp": 1682048754
  },
  {
    "group_id": "pd",
    "id": "2",
    "start_key": "",
    "end_key": "",
    "role": "follower",
    "count": 2,
    "label_constraints": [
      {
        "key": "kind",
        "op": "in",
        "values": [
          "hdd"
        ]
      }
    ],
    "location_labels": [
      "host"
    ],
    "version": 3,
    "create_timestamp": 1682048754
  }
]

This is my placement-rule configuration. My intent is to keep all leaders on SSD stores and only followers on HDD stores. However, after applying the rules, no replicas were migrated. Did I use the wrong method, or does RawKV simply not support this setup?
Cluster version: v5.3.0
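
For comparison, the documented way to pin leaders to particular stores is the dedicated "leader" role rather than "voter" (whether that explains the missing migration here is only a guess on my part). Rule 1 rewritten that way would look like:

{
  "group_id": "pd",
  "id": "1",
  "start_key": "",
  "end_key": "",
  "role": "leader",
  "count": 1,
  "label_constraints": [
    { "key": "kind", "op": "in", "values": [ "ssd" ] }
  ],
  "location_labels": [ "host" ]
}

It is also worth confirming with pd-ctl store that every TiKV store actually reports its kind label, since a rule that matches no store cannot trigger any migration.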

| username: zzzzzz | Original post link

Using this might be more intuitive?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.