Ask If You Don't Understand: TiDB Cluster Migration Across Data Centers (Cloud Regions)

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 不懂就问:tidb集群跨DC(云 Region)迁移

| username: TiDBer_yyy

[TiDB Usage Environment] Production Environment
[TiDB Version] v4.0.16, v5.0.1, v5.0.4, v5.0.5

[Encountered Issues: Problem Phenomenon and Impact]
Background:
Planning to switch cloud regions, which is effectively migrating the TiDB cluster across data centers (DCs), with 40 Mb of network bandwidth between the DCs. At the same time, all services and databases will depend on this link for the migration.

Question 1:
Can the versions above complete the migration with a scale-out/scale-in approach? What are the risks?

Question 2:
If a TiDB primary-secondary cluster pair is set up with TiCDC replication, will the TSO of the two clusters be consistent?
The solution is as follows:

| username: TiDBer_yyy | Original post link

Question 2:
See https://docs.pingcap.com/zh/tidb/stable/upstream-downstream-diff
In v6.4.0, TiCDC maintains a ts-map of consistent snapshots between the upstream and downstream clusters.

As the output below shows, the TSOs of the TiDB primary and secondary clusters are not consistent, so tso-1 cannot be used directly as the start-ts of the new TiCDC replication task in DC4.

+------------------+----------------+--------------------+--------------------+---------------------+
| ticdc_cluster_id | changefeed     | primary_ts         | secondary_ts       | created_at          |
+------------------+----------------+--------------------+--------------------+---------------------+
| default          | test-2         | 435953225454059520 | 435953235516456963 | 2022-09-13 08:40:15 |
+------------------+----------------+--------------------+--------------------+---------------------+
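
For reference, a ts-map like the one above comes from the syncpoint table that TiCDC writes into the downstream cluster when sync-point is enabled for the changefeed. Assuming the default location of that table, it can be queried with:

-- run on the downstream TiDB cluster
SELECT ticdc_cluster_id, changefeed, primary_ts, secondary_ts, created_at
FROM tidb_cdc.syncpoint_v1
ORDER BY created_at DESC
LIMIT 10;
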
| username: eastfisher | Original post link

Before pausing the DC1 CDC task, you should look up the closest pair of upstream and downstream TSOs in the ts-map, use the downstream TSO as the start-ts of a new CDC task in DC4, and then handle deduplication of the overlapping CDC data on the business side.

| username: TiDBer_yyy | Original post link

Thanks, I understand this solution now. However, the current TiCDC version is 5.0.4, and it seems there is no ts-map.

Currently, the data format is canal-json. Can the es field be used as a unique identifier?

{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"40885458645-32642579611-87780805360-75786734419-72768045065-73753996681-89397932258-52312229413-39775267520-39666045879","id":"2517","k":"2918","pad":"65257871835-02336757793-35547215331-13506539015-36914329313"}],"old":[{"c":"60863650832-99507440173-07738309387-99422695339-12533914802-83346224518-76619046045-53817415661-47267488726-39986665474","id":"2517","k":"2499","pad":"38090510652-07702250434-08975824054-31762704218-35254676445"}]}
{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"36748437506-94556857415-69545915809-09888142405-70283253843-49398631621-75942281182-73213913728-69818887950-29633019257","id":"2492","k":"2521","pad":"90443775370-75210883269-83077322641-39372892294-63319191513"}],"old":[{"c":"98468727765-71098460514-17871547751-31115406523-51850727858-10040790503-10290411769-16980158605-47885080784-68064720857","id":"2492","k":"2521","pad":"90443775370-75210883269-83077322641-39372892294-63319191513"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"09241211417-57691820593-79764562888-73842992267-41262887361-75389434110-77222691084-93085932883-64958287620-62880885482","id":"3082","k":"2525","pad":"01968840133-25459477374-54317852552-80338720400-75459953512"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"19934403057-34971673456-92467389935-20811426463-07174676987-01687857565-39759477338-48074877637-87372120758-58739047390","id":"2513","k":"2443","pad":"91046045506-19115897563-62460380646-15683524292-24522152238"}],"old":[{"c":"19934403057-34971673456-92467389935-20811426463-07174676987-01687857565-39759477338-48074877637-87372120758-58739047390","id":"2513","k":"2442","pad":"91046045506-19115897563-62460380646-15683524292-24522152238"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"13691623327-83005145805-83631958316-46123856772-15705105838-81081547402-36969822895-52100011575-05302806548-04773569224","id":"2525","k":"2500","pad":"43502600413-60335724659-82061816160-56772024807-72825945280"}],"old":[{"c":"89539530113-62301570253-03465359037-07700414367-29592947089-72988169921-24405068518-18749371172-06521742235-43439314539","id":"2525","k":"2500","pad":"43502600413-60335724659-82061816160-56772024807-72825945280"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"31844738662-77280625167-89838779753-82309602385-13504667772-77003535616-92995740179-47081762615-93223864315-86695008012","id":"2526","k":"2512","pad":"43150350394-96332674850-41785125532-36262395640-43793389427"}],"old":[{"c":"53678190762-55451278874-74926587698-38804022752-04443180180-72947618338-99192311629-59893116827-91530418203-30890060984","id":"2526","k":"2443","pad":"92877974684-89527046757-36449185964-90804075956-95335136960"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"19800640099-72376342866-33213366919-93251112341-11592349316-05428160136-50966466520-62185565371-26685956060-76118892153","id":"2501","k":"2508","pad":"65724646516-72204530063-58424860195-51681546933-07413984546"}],"old":[{"c":"19800640099-72376342866-33213366919-93251112341-11592349316-05428160136-50966466520-62185565371-26685956060-76118892153","id":"2501","k":"2507","pad":"65724646516-72204530063-58424860195-51681546933-07413984546"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"91841022490-65183271807-21614876311-66664605746-60061001119-83526200880-22194712722-38362928776-47160873623-16284071693","id":"2518","k":"2514","pad":"02776960888-51240075245-26601022106-62361518668-84400300030"}],"old":[{"c":"59692093312-98684014182-92278707377-13901172019-05189745761-74481492881-88246290598-76145176570-80122731207-28078526067","id":"2518","k":"2507","pad":"47391824939-71827390667-04782801007-92236607649-23469695120"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"76237400836-26598228655-56274420473-72854069621-84065681506-86585863725-29405215281-86584208215-22608850900-23663890960","id":"2515","k":"2524","pad":"70478713515-41875495154-54274875136-79390411375-64064974633"}],"old":[{"c":"19114950417-60274401512-57071685027-84229394553-33395311129-55171071778-06094682076-24592884977-00674909810-84805217223","id":"2515","k":"2524","pad":"70478713515-41875495154-54274875136-79390411375-64064974633"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"51395841833-42708519690-24428794152-53103598315-46414085449-41809516665-92899184184-32358865054-46477233949-52153611433","id":"2700","k":"2491","pad":"41507791580-89054770416-96344423531-24818895616-34057255403"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"83619892387-60417650883-46504242914-13504855781-85532307808-10348344631-18843486287-72948635733-31250818916-04555101442","id":"2793","k":"2990","pad":"92960521900-42498903363-35989631384-69359851013-30787494047"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"85868054889-07511000969-42911596530-35347725424-91827654051-65008702917-52332602968-51887664247-86684615430-15159347379","id":"2494","k":"2493","pad":"26792467021-01484785863-45100936154-18995444297-88478383488"}],"old":[{"c":"85868054889-07511000969-42911596530-35347725424-91827654051-65008702917-52332602968-51887664247-86684615430-15159347379","id":"2494","k":"2492","pad":"26792467021-01484785863-45100936154-18995444297-88478383488"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"21783985126-98084808691-43725135017-00387949361-85105783910-92737547188-11283879525-92430796799-13166349163-02664006206","id":"2488","k":"2077","pad":"59643692351-74416335562-47894835839-88859043548-07712245874"}],"old":[{"c":"63349236844-13583054515-08339603696-25291547972-51163040841-59747865941-96948615494-20282525241-79860160710-26595326278","id":"2488","k":"2077","pad":"59643692351-74416335562-47894835839-88859043548-07712245874"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"16504897463-94629584892-47554908881-67943009449-05387121982-35633085054-15144861214-28184515442-47633639648-85265902260","id":"2490","k":"2503","pad":"89931153172-94712087801-77595487791-16294616439-35316696813"}],"old":[{"c":"16504897463-94629584892-47554908881-67943009449-05387121982-35633085054-15144861214-28184515442-47633639648-85265902260","id":"2490","k":"2502","pad":"89931153172-94712087801-77595487791-16294616439-35316696813"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"57871650441-34609648749-61556302419-79611357656-50875938359-79757458909-16970262656-33870182681-42720015636-91822851929","id":"2508","k":"2520","pad":"06061504511-94466593959-02881086504-35429971081-06069653615"}],"old":[{"c":"04541088532-06053666798-15525136093-54266388636-51550468207-92515958213-33177020037-54853722639-80847268874-54395882618","id":"2508","k":"3384","pad":"19770731527-28035293328-91085620215-02508971454-90321757220"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"22328732408-84553755166-56481834696-46824407968-09221029728-65552763549-78652057272-99695904643-09959496299-77757396239","id":"2514","k":"2510","pad":"32495370377-75268095470-89531226896-49712844359-65742255463"}],"old":[{"c":"81494412638-69779651720-15842456299-87653785182-62338904225-70162278213-54689526797-76128833216-27926990918-97065712167","id":"2514","k":"2510","pad":"32495370377
| username: eastfisher | Original post link

The “es” field in canal-json messages is not globally unique, only monotonically non-decreasing (several messages in the sample above share the same value). The business side still needs to deduplicate the data during consumption.
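
If the consumer replays these canal-json messages into another database, one common way to tolerate duplicates is to write idempotently keyed on pkNames (id here) and to skip rows whose es is not newer than what was already applied. A minimal sketch against a hypothetical target table (column values abbreviated from the first sample message above):

-- hypothetical replica table: the replicated columns plus the source commit time (es)
CREATE TABLE IF NOT EXISTS sbtest.sbtest1_replica (
  id     INT PRIMARY KEY,
  k      INT,
  c      CHAR(120),
  pad    CHAR(60),
  src_es BIGINT NOT NULL  -- es of the last message applied to this row
);

-- apply one INSERT/UPDATE message; replays and out-of-order duplicates are ignored
-- because the row is only overwritten when the incoming es is strictly newer
INSERT INTO sbtest.sbtest1_replica (id, k, c, pad, src_es)
VALUES (2517, 2918, '40885458645-...', '65257871835-...', 1678698106465)
ON DUPLICATE KEY UPDATE
  k      = IF(VALUES(src_es) > src_es, VALUES(k),   k),
  c      = IF(VALUES(src_es) > src_es, VALUES(c),   c),
  pad    = IF(VALUES(src_es) > src_es, VALUES(pad), pad),
  src_es = GREATEST(src_es, VALUES(src_es));

Note that two changes to the same row within the same millisecond would be collapsed by this rule, which is exactly why a business-level dedup convention is still needed.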

| username: TiDBer_yyy | Original post link

Got it. This is a millisecond-level timestamp. If these timestamps are from two different TiDB clusters, will there be a significant difference?

| username: eastfisher | Original post link

Regarding the data synchronization delay between TiCDC upstream and downstream, you can refer to the relevant monitoring metrics for TiCDC Changefeed:

Under normal synchronization, there should not be a significant delay.

| username: dba-kit | Original post link

Is this cloud-region migration within the same city or across cities? If it is within the same city, you can simply scale out to the new DC and then scale in the old one; the impact won't be significant. If it is across cities, then for TiDB 6.x you can consider the Placement Rules in SQL approach: first add learner replicas on the TiKV nodes in the new data center, and once the data has fully caught up, pick a suitable time to switch all the table leaders over to the new data center.
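
A rough illustration of that 6.x approach with Placement Rules in SQL (the policy name, table, and replica counts below are placeholders, the dc label must match the TiKV server.labels, and this assumes a version where the feature is GA):

-- step 1: add learner replicas in the new DC while keeping all voters in the old DC
CREATE PLACEMENT POLICY migrate_to_bj4 LEARNERS=2 LEARNER_CONSTRAINTS='[+dc=bj4]';
ALTER TABLE sbtest.sbtest1 PLACEMENT POLICY=migrate_to_bj4;

-- step 2: once the learners have caught up, switch leaders to the new DC at a quiet time
ALTER PLACEMENT POLICY migrate_to_bj4
  LEADER_CONSTRAINTS='[+dc=bj4]'
  FOLLOWER_CONSTRAINTS='[+dc=bj1]'
  FOLLOWERS=2;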

PS: For the 5.X version, the placement rule can only be modified through pd-ctl, so you’ll need to explore how to write it. You can refer to the configuration in this article:
专栏 - DR Auto-Sync 搭建和灾难恢复手册 | TiDB 社区

| username: TiDBer_yyy | Original post link

It is cross-region: the inter-region network bandwidth is 40 Mb and the latency is about 7 ms.

| username: dba-kit | Original post link

This must be treated as a cross-city setup due to the high network latency. You can study how to write Placement Rules and handle migration by adjusting the replica roles.

PS: The bandwidth between the two clusters is only 40M, which is really small… A figure like this usually means 40 Mb/s, i.e. only about 5 MB/s. And since all database replication shares the same link, there will definitely be lag during peak periods (how much depends on your business volume).
Generally speaking, fixed bandwidth costs much more than pay-as-you-go, and the bandwidth cap for pay-as-you-go is usually very high. You could evaluate the total data to be synchronized and the daily increment; pay-as-you-go might turn out cheaper and give you more bandwidth.

| username: dba-kit | Original post link

PPS: The solutions I mentioned above are all based on Solution 1. In fact, Solution 2 has lower operational costs and is easier to understand. However, it is crucial to ensure proper permission protection for the new cluster to prevent any erroneous writes to the new DC during the migration period.
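
One simple form of that protection, assuming the applications connect through a dedicated account (the 'app' user below is hypothetical), is to grant only read privileges on the new cluster until cutover:

-- on the new-DC cluster: create the application account as read-only for now
CREATE USER IF NOT EXISTS 'app'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT ON *.* TO 'app'@'%';

-- at cutover time, open up writes
GRANT INSERT, UPDATE, DELETE ON *.* TO 'app'@'%';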

| username: TiDBer_yyy | Original post link

Thank you. Indeed, the measured bandwidth is about 5 MB/s.

  1. In an actual test on v5.0.4, Placement Rules could not complete the migration as expected. (The current TiDB clusters run 4.0.16, 5.0.1, 5.0.4, and 5.0.5; I will test the other versions later.)

Assumption: a cluster holds 20 TB of Region data in total; transferring it at the full 5 MB/s bandwidth takes about 48.55 days (see the arithmetic check after this list).

  2. Solution 2: TiCDC primary-secondary replication. The clusters are heavily mixed-use and each business has to be migrated one by one; in the mixed-use databases, some businesses cannot tolerate even a short (<300 s) write stop.
   PS: For DBAs this solution is low-cost, and most importantly it is controllable;
   the only downside is that some businesses cannot stop writes during the migration window.
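
A quick check of the 48.55-day estimate, treating 20 TB as 20 × 1024 × 1024 MB and assuming the 40 Mb link stays saturated at 5 MB/s:

20 × 1024 × 1024 MB ÷ 5 MB/s = 4,194,304 s ≈ 48.5 days
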
| username: dba-kit | Original post link

Let me ask again, is your initial data source written directly to TiDB, or is it synchronized from MySQL? If it comes from MySQL, you might consider starting with DM.

| username: TiDBer_yyy | Original post link

There are both:

  • Some services use dedicated clusters with a relatively small data volume (<1 TB) and cannot stop writes; this approach should work for them.
  • Some clusters are mixed-use, with DM writes, direct business writes, and Flink writes.
| username: dba-kit | Original post link

If Flink is writing data, you should also check whether all tables have primary keys. I remember that the table structures it generates by default do not have primary keys. How to do data verification is also an issue. :thinking:
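
One way to spot such tables up front is to query information_schema; a sketch (adjust the schema filter to your own databases):

-- list user tables that have neither a primary key nor a unique constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND UPPER(t.table_schema) NOT IN ('MYSQL', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'METRICS_SCHEMA')
  AND c.constraint_name IS NULL;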

| username: dba-kit | Original post link

If the real-time requirements are high and the data is synchronized from MySQL, that part of the data can actually bypass TiCDC: replicate the MySQL data directly to the new data center, and let the TiDB cluster there sync from the new DC's MySQL via DM. This also saves some bandwidth, and at the final cutover you only need to handle the MySQL switch.

| username: dba-kit | Original post link

For other clusters that can allow long periods of write stoppage, there are actually many more flexible options available.

| username: TiDBer_yyy | Original post link

Good idea: for the dedicated clusters where the business writes directly to TiDB, deploying a separate cluster for the core business works. This opens up new possibilities.

| username: dba-kit | Original post link

Could you share the rule configuration you wrote? According to the 5.0 documentation, it also supports adding non-voting follower nodes for all tables.

| username: TiDBer_yyy | Original post link

Original cluster configuration

global:
  user: tidb
  ssh_port: 22
  deploy_dir: /data/tidb-deploy
  data_dir: /data/tidb-data/
  os: linux
  arch: amd64
monitored:
  node_exporter_port: 39100
  blackbox_exporter_port: 39115
  deploy_dir: /data/tidb-deploy/monitor-39100
  data_dir: /data/tidb-data/monitor_data
  log_dir: /data/tidb-deploy/monitor-39100/log
server_configs:
  tidb:
    oom-use-tmp-storage: true
    performance.max-procs: 0
    performance.txn-total-size-limit: 2097152
    prepared-plan-cache.enabled: true
    tikv-client.copr-cache.capacity-mb: 128.0
    tikv-client.max-batch-wait-time: 0
    tmp-storage-path: /data/tidb-data/tmp_oom
    split-table: true
  tikv:
    coprocessor.split-region-on-table: true
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
    server.grpc-compression-type: none
    storage.block-cache.shared: true
  pd:
    enable-cross-table-merge: false
    replication.enable-placement-rules: true
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
    replication.location-labels: ["dc","logic","rack","host"]
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
tidb_servers:
- host: 192.168.8.11
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /data/tidb-deploy/tidb_4000
 
 
tikv_servers:
- host: 192.168.8.11
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
 
 
- host: 192.168.8.12
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
   
 
- host: 192.168.8.13
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
   
 
pd_servers:
- host: 192.168.8.11
  ssh_port: 22
  name: pd-192.168.8.11-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /data/tidb-deploy/pd_2379
  data_dir: /data/tidb-data/pd_2379

Modify Cluster Labels

tiup cluster edit-config tidb_placement_rule_remove
# Add label configuration for each TiKV node:
  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.11_20160" }

  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.12_20160" }

  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.13_20160" }
 
 
# Apply configuration changes
tiup cluster reload tidb_placement_rule_remove -R tikv -y
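
To confirm the labels took effect after the reload, the store labels can be checked from information_schema (or with pd-ctl store); a minimal check:

-- each store should now report the dc/logic/rack/host labels configured above
SELECT store_id, address, label
FROM information_schema.tikv_store_status;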

Scale-out New Data Center TiKV

tiup cluster scale-out tidb_placement_rule_remove scale-out-pr-test.yaml -u root -p
  • Configuration file
tikv_servers:
 - host: 192.168.8.12
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.12_20161" }
 - host: 192.168.8.13
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.13_20161" }
 
 - host: 192.168.8.14
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.14_20161" }
  • After scaling out, you can see that a follower region is scheduled to the 192.168.8.12:20161 machine
SELECT  region.TABLE_NAME,  tikv.address,  case when region.IS_INDEX = 1 then "index" else "data" end as "region-type",  case when peer.is_leader = 1 then region.region_id end as "leader",
 case when peer.is_leader = 0 then region.region_id end as "follower",  case when peer.IS_LEARNER = 1 then region.region_id end as "learner"
FROM  information_schema.tikv_store_status tikv,  information_schema.tikv_region_peers peer, 
(SELECT * FROM information_schema.tikv_region_status where DB_NAME='test' and TABLE_NAME='sbtest1' and IS_INDEX=0) region
WHERE   region.region_id = peer.region_id  AND peer.store_id = tikv.store_id order by 1,3;
 
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.13:20160 | data        |   NULL |       16 |    NULL |
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |       16 |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |     16 |     NULL |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
3 rows in set (0.02 sec)
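
Beyond a single table, the overall scheduling progress into the new data center can be tracked by counting peers per store; a sketch using the same information_schema tables:

-- leaders/followers/learners currently hosted by each TiKV store
SELECT tikv.address,
       SUM(peer.is_leader = 1)                         AS leaders,
       SUM(peer.is_leader = 0 AND peer.is_learner = 0) AS followers,
       SUM(peer.is_learner = 1)                        AS learners
FROM information_schema.tikv_region_peers peer
JOIN information_schema.tikv_store_status tikv ON peer.store_id = tikv.store_id
GROUP BY tikv.address;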

Configure Placement Rule

  • Data center dc-bj1 has 3 voters
  • Data center dc-bj4 has 2 followers
cat > rules.json <<EOF
[{
  "group_id": "pd",
  "group_index": 0,
  "group_override": false,
  "rules": [
    {
        "group_id": "pd",
        "id": "dc-bj1",
        "start_key": "",
        "end_key": "",
        "role": "voter",
        "count": 3,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj1"]}
        ],
        "location_labels": ["dc"]
    },
    {
        "group_id": "pd",
        "id": "dc-bj4",
        "start_key": "",
        "end_key": "",
        "role": "follower",
        "count": 2,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj4"]}
        ],
        "location_labels": ["dc"]
    }
]
}
]
EOF

Apply Placement Rule

tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules rule-bundle save --in=rules.json

Check Region Distribution Status

You can see that the Regions are scheduled as expected: the “bj4” data center has not been assigned any leaders and currently holds only followers.

MySQL [(none)]> SELECT region.TABLE_NAME, tikv.address,
       CASE WHEN region.IS_INDEX = 1 THEN "index" ELSE "data" END AS "region-type",
       CASE WHEN peer.is_leader = 1 THEN region.region_id END AS "leader",
       CASE WHEN peer.is_leader = 0 THEN region.region_id END AS "follower",
       CASE WHEN peer.IS_LEARNER = 1 THEN region.region_id END AS "learner"
FROM information_schema.tikv_store_status tikv,
     information_schema.tikv_region_peers peer,
     (SELECT * FROM information_schema.tikv_region_status
      WHERE DB_NAME = 'test' AND TABLE_NAME = 'sbtest1' AND IS_INDEX = 0) region
WHERE region.region_id = peer.region_id
  AND peer.store_id = tikv.store_id
ORDER BY 1, 3;
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.12:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.14:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.13:20160 | data        |      3 |     NULL |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |   NULL |        3 |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
  • After the placement-rule switch, no Region leader is elected in bj4:
[tidb@centos1 deploy]$ tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules rule-bundle save --in=rules.json
[tidb@centos1 deploy]$ tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules show
 
MySQL [(none)]> SELECT region.TABLE_NAME, tikv.address,
       CASE WHEN region.IS_INDEX = 1 THEN "index" ELSE "data" END AS "region-type",
       CASE WHEN peer.is_leader = 1 THEN region.region_id END AS "leader",
       CASE WHEN peer.is_leader = 0 THEN region.region_id END AS "follower",
       CASE WHEN peer.IS_LEARNER = 1 THEN region.region_id END AS "learner"
FROM information_schema.tikv_store_status tikv,
     information_schema.tikv_region_peers peer,
     (SELECT * FROM information_schema.tikv_region_status
      WHERE DB_NAME = 'test' AND TABLE_NAME = 'sbtest1' AND IS_INDEX = 0) region
WHERE region.region_id = peer.region_id
  AND peer.store_id = tikv.store_id
ORDER BY 1, 3;
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.13:20160 | data        |      3 |     NULL |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.12:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.14:20161 | data        |   NULL |        3 |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
5 rows in set (0.01 sec)