Leader Scheduling Issue

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Leader调度问题

| username: zzsh

[TiDB Usage Environment]
Testing

[TiDB Version]
v5.0.3

[Environment]
Two-city, three-center architecture:
Two TiKV nodes per center
Region leaders are kept in the two same-city centers
The following command was run for each of the two nodes in the remote center:
scheduler add evict-leader-scheduler <store_id>
The cluster is configured with 5 replicas

All node servers have the same hardware configuration, and the cluster basically uses default settings.
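
For reference, a minimal pd-ctl sketch of that setup, run through tiup (the PD address and the store IDs 6 and 7 are hypothetical placeholders for the two remote-center TiKV stores):

# Evict leaders from the two hypothetical remote-center stores
tiup ctl:v5.0.3 pd -u http://127.0.0.1:2379 scheduler add evict-leader-scheduler 6
tiup ctl:v5.0.3 pd -u http://127.0.0.1:2379 scheduler add evict-leader-scheduler 7

# Confirm the schedulers are registered
tiup ctl:v5.0.3 pd -u http://127.0.0.1:2379 scheduler show

# Use 5 replicas per Region
tiup ctl:v5.0.3 pd -u http://127.0.0.1:2379 config set max-replicas 5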

[Problem]
A node in one of the same-city centers had a disk I/O problem and became unavailable.
As a result, replicas and leaders were re-created on the nodes in the remote center, and business access became extremely slow.

| username: zzsh | Original post link

Why were leaders created on the remote nodes? Did the eviction command not take effect?
How can this be solved?

| username: DBAER | Original post link

Refer to this. You also need to set:

config set label-property reject-leader dc 3

This keeps leaders out of the backup center.
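
A minimal sketch of applying and checking this in pd-ctl, assuming the remote-center TiKV stores carry the label dc=3 (the » prompt is pd-ctl's interactive prompt):

# Reject leaders on all stores labeled dc=3 (the remote/backup center)
» config set label-property reject-leader dc 3

# Verify the property took effect
» config show label-property

# Roll it back later if needed
» config delete label-property reject-leader dc 3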

| username: GreenGuan | Original post link

This is strongly coupled with the label settings. Could you please share the label configuration?

| username: zzsh | Original post link

The effect of this command is not permanent, right?

| username: zzsh | Original post link

No labels have been set.

| username: DBAER | Original post link

I think the scheduler should also keep leaders off those nodes under normal conditions. Can you check whether the PD scheduling rules and the store IDs actually correspond?
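
For example, a hedged pd-ctl sketch for that check (run inside an interactive pd-ctl session):

# List all stores with their IDs, labels, and leader counts
» store

# List the schedulers currently registered
» scheduler show

# Check which store IDs the evict-leader scheduler covers
» scheduler config evict-leader-scheduler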

| username: zzsh | Original post link

The output shows the leader eviction schedulers are in place, and the store IDs correspond.

| username: 小龙虾爱大龙虾 | Original post link

There might be an issue with your configuration. In TiDB v5, under a two-city three-center architecture, you should set the labels appropriately, for example:

tikv_servers:
  - host: 10.63.10.30
    config:
      server.labels: { az: "1", replication zone: "1", rack: "1", host: "30" }
  - host: 10.63.10.31
    config:
      server.labels: { az: "1", replication zone: "2", rack: "2", host: "31" }
  - host: 10.63.10.32
    config:
      server.labels: { az: "2", replication zone: "3", rack: "3", host: "32" }
  - host: 10.63.10.33
    config:
      server.labels: { az: "2", replication zone: "4", rack: "4", host: "33" }
  - host: 10.63.10.34
    config:
      server.labels: { az: "3", replication zone: "5", rack: "5", host: "34" }

To prevent the remote center from holding leaders, use the config set label-property reject-leader az 3 command. For detailed configuration, refer to Multiple Availability Zones in Two Regions Deployment (双区域多 AZ 部署 TiDB) in the PingCAP docs.

Also, set isolation-level to enforce the minimum physical isolation requirement for Region replicas.
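
A minimal sketch of the matching PD settings for the labels above (the isolation level "az" is an assumption; adjust it to your own label scheme):

server_configs:
  pd:
    replication.location-labels: ["az", "replication zone", "rack", "host"]
    replication.isolation-level: "az"
    replication.max-replicas: 5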

To analyze your current issue, please provide the PD configuration by running config show all.
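
For example, through tiup (the PD address is a placeholder):

tiup ctl:v5.0.3 pd -u http://127.0.0.1:2379 config show all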