TiDB 6.2.0 ALTER PLACEMENT POLICY error 8243

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TIDB6.2.0 alter placement policy error 8243

| username: tjdagang1

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] V6.2.0
[Encountered Problem] TiDB 6.2.0 returned error 8243 on ALTER PLACEMENT POLICY while following the two-city, three-data-center guide from TiDB 6.x in Action.
Reference link: Deploying a Two-City, Three-Data-Center Architecture Based on TiDB v6.0 | TiDB Books
[Reproduction Path] Operations performed that led to the problem
[Problem Phenomenon and Impact]
Cluster status: (screenshot)

Rules: (screenshot)

Error: (screenshot)

[Attachments]

Please provide the version information of each component, such as cdc/tikv, which can be obtained by executing cdc version/tikv-server --version.

| username: h5n1 | Original post link

With the current policy, the leader takes one replica in a non-sjz data center, four followers are placed in non-sjz data centers, and one replica each goes to sjz, eur, and ame: seven replicas in total. With only five TiKV nodes, one of which is down, that requirement cannot be met. Try changing the non-sjz follower count to 1 first.
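
As a concrete sketch of that suggestion (using the policy name and label keys that appear later in the thread; not a statement from the original post):

ALTER PLACEMENT POLICY northernpolicy
  LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]'
  -- drop the non-sjz follower count from 4 to 1; keep one follower in sjz
  FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,"+dc=sjz": 1}';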

| username: tjdagang1 | Original post link

All 7 TiKV nodes have been started, and the follower count has been changed to 1, but error 8243 is still reported.

[tidb@centos ~]$ tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v6.2.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.58.133:2379/dashboard
Grafana URL:        http://192.168.58.133:3000
ID                    Role          Host            Ports        OS/Arch       Status   Data Dir                                        Deploy Dir
--                    ----          ----            -----        -------       ------   --------                                        ----------
192.168.58.133:9093   alertmanager  192.168.58.133  9093/9094    linux/x86_64  Up       /home/tidb/cluster/tidb-data/alertmanager-9093  /home/tidb/cluster/tidb-deploy/alertmanager-9093
192.168.58.133:3000   grafana       192.168.58.133  3000         linux/x86_64  Up       -                                               /home/tidb/cluster/tidb-deploy/grafana-3000
192.168.58.133:2379   pd            192.168.58.133  2379/2380    linux/x86_64  Up|L|UI  /home/tidb/cluster/tidb-data/pd-2379            /home/tidb/cluster/tidb-deploy/pd-2379
192.168.58.133:9090   prometheus    192.168.58.133  9090/12020   linux/x86_64  Up       /home/tidb/cluster/tidb-data/prometheus-9090    /home/tidb/cluster/tidb-deploy/prometheus-9090
192.168.58.133:4000   tidb          192.168.58.133  4000/10080   linux/x86_64  Up       -                                               /home/tidb/cluster/tidb-deploy/tidb-4000
192.168.58.133:20160  tikv          192.168.58.133  20160/20180  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20160         /home/tidb/cluster/tidb-deploy/tikv-20160
192.168.58.133:20161  tikv          192.168.58.133  20161/20181  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20161         /home/tidb/cluster/tidb-deploy/tikv-20161
192.168.58.133:20162  tikv          192.168.58.133  20162/20182  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20162         /home/tidb/cluster/tidb-deploy/tikv-20162
192.168.58.133:20163  tikv          192.168.58.133  20163/20183  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20163         /home/tidb/cluster/tidb-deploy/tikv-20163
192.168.58.133:20164  tikv          192.168.58.133  20164/20184  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20164         /home/tidb/cluster/tidb-deploy/tikv-20164
192.168.58.133:20165  tikv          192.168.58.133  20165/20185  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20165         /home/tidb/cluster/tidb-deploy/tikv-20165
192.168.58.133:20166  tikv          192.168.58.133  20166/20186  linux/x86_64  Up       /home/tidb/cluster/tidb-data/tikv-20166         /home/tidb/cluster/tidb-deploy/tikv-20166
Total nodes: 12
[tidb@centos ~]$

[root@centos ~]# mysql -h192.168.58.133 -P4000 -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 405
Server version: 5.7.25-TiDB-v6.2.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use crm
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [crm]> SHOW PLACEMENT LABELS;
+------+-------------------------------------------------------------------------------+
| Key  | Values                                                                        |
+------+-------------------------------------------------------------------------------+
| area | ["america", "europe", "northern"]                                             |
| dc   | ["bj1", "bj2", "germany", "sjz", "usa"]                                       |
| host | ["host100", "host101", "host102", "host103", "host104", "host105", "host106"] |
| rack | ["r1", "r2"]                                                                  |
+------+-------------------------------------------------------------------------------+
4 rows in set (0.00 sec)

MySQL [crm]> show placement;
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
| Target                 | Placement                                                                                                     | Scheduling_State |
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
| POLICY northernpolicy  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | NULL             |
| DATABASE crm           | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_data  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_label | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_main  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_org   | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_seed       | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
7 rows in set (0.00 sec)

MySQL [crm]> select * from information_schema.placement_policies;
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
| POLICY_ID | CATALOG_NAME | POLICY_NAME    | PRIMARY_REGION | REGIONS | CONSTRAINTS | LEADER_CONSTRAINTS       | FOLLOWER_CONSTRAINTS                     | LEARNER_CONSTRAINTS | SCHEDULE | FOLLOWERS | LEARNERS |
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
|         2 | def          | northernpolicy |                |         |             | [+area=northern,-dc=sjz] | {"+area=northern,-dc=sjz": 4,+dc=sjz: 1} |                     |          |         2 |        0 |
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
1 row in set (0.00 sec)

MySQL [crm]> select a.region_id,a.peer_id,a.store_id,a.is_leader,b.address,b.label  from INFORMATION_SCHEMA.TIKV_REGION_PEERS a
    -> left join INFORMATION_SCHEMA.TIKV_STORE_STATUS b on a.store_id =b.store_id
    -> where a.region_id =218;
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| region_id | peer_id | store_id | is_leader | address              | label                                                                                                                                      |
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
|       218 |     219 |        1 |         1 | 192.168.58.133:20163 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r2"}, {"key": "host", "value": "host103"}, {"key": "dc", "value": "bj2"}] |
|       218 |     220 |        2 |         0 | 192.168.58.133:20160 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host100"}, {"key": "dc", "value": "bj1"}] |
|       218 |     221 |        8 |         0 | 192.168.58.133:20164 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host104"}, {"key": "dc", "value": "sjz"}] |
|       218 |     222 |        9 |         0 | 192.168.58.133:20161 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r2"}, {"key": "host", "value": "host101"}, {"key": "dc", "value": "bj1"}] |
|       218 |     223 |        7 |         0 | 192.168.58.133:20162 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host102"}, {"key": "dc", "value": "bj2"}] |
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.00 sec)

MySQL [crm]> show create PLACEMENT POLICY northernpolicy;
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Policy         | Create Policy                                                                                                                                          |
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| northernpolicy | CREATE PLACEMENT POLICY `northernpolicy` LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" |
+----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

MySQL [crm]> ALTER  PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 4,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_3' from rule group 'TiDB_DDL_72' can not match any store"

MySQL [crm]> ALTER  PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 3,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_3' from rule group 'TiDB_DDL_72' can not match any store"

MySQL [crm]> ALTER  PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_2' from rule group 'TiDB_DDL_72' can not match any store"
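
The message means PD checked the new rules against the current stores and found a constraint that no store's labels satisfy. A quick way to see exactly which labels PD has to work with is the system table already queried above (a sketch, not part of the original post):

-- list every store with the labels PD sees; a constraint such as
-- +dc=europe can only match if some store carries that exact pair
SELECT store_id, address, label
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;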

| username: h5n1 | Original post link

There is no label that can satisfy +dc=europe: 1 or +dc=america: 1 either: none of the TiKV stores carries dc=europe or dc=america.

| username: tjdagang1 | Original post link

I don't quite understand what you mean. Is that result from before Europe and America were added?

| username: h5n1 | Original post link

Do your TiKV labels include {"key": "dc", "value": "europe"} or {"key": "dc", "value": "america"}? If no TiKV carries such a label, how can +dc=europe: 1 or +dc=america: 1 be matched?
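
To verify directly, reusing the same system table (a sketch; JSON_CONTAINS checks whether the label array holds the given key/value object):

SELECT store_id, address
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS
WHERE JSON_CONTAINS(label, '{"key": "dc", "value": "europe"}')
   OR JSON_CONTAINS(label, '{"key": "dc", "value": "america"}');
-- an empty result means no store can satisfy +dc=europe / +dc=america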

| username: tjdagang1 | Original post link

Of course, the labels are set.
[tidb@centos ~]$ tiup cluster show-config tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster show-config tidb-test
global:
  user: tidb
  ssh_port: 22
  ssh_type: builtin
  deploy_dir: /home/tidb/cluster/tidb-deploy
  data_dir: /home/tidb/cluster/tidb-data
  os: linux
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: /home/tidb/cluster/tidb-deploy/monitor-9100
  data_dir: /home/tidb/cluster/tidb-data/monitor-9100
  log_dir: /home/tidb/cluster/tidb-deploy/monitor-9100/log
server_configs:
  tidb:
    binlog.enable: false
    binlog.ignore-error: false
    log.slow-threshold: 300
  tikv:
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
    server.grpc-compression-type: gzip
  pd:
    replication.location-labels:
    - area
    - dc
    - rack
    - host
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: null
    schedule.tolerant-size-ratio: 20.0
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
  grafana: {}
tidb_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /home/tidb/cluster/tidb-deploy/tidb-4000
  log_dir: /home/tidb/cluster/tidb-deploy/tidb-4000/log
  arch: amd64
  os: linux
tikv_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20160
  data_dir: /home/tidb/cluster/tidb-data/tikv-20160
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20160/log
  config:
    server.labels:
      area: northern
      dc: bj1
      host: host100
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20161
  status_port: 20181
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20161
  data_dir: /home/tidb/cluster/tidb-data/tikv-20161
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20161/log
  config:
    server.labels:
      area: northern
      dc: bj1
      host: host101
      rack: r2
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20162
  status_port: 20182
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20162
  data_dir: /home/tidb/cluster/tidb-data/tikv-20162
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20162/log
  config:
    server.labels:
      area: northern
      dc: bj2
      host: host102
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20163
  status_port: 20183
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20163
  data_dir: /home/tidb/cluster/tidb-data/tikv-20163
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20163/log
  config:
    server.labels:
      area: northern
      dc: bj2
      host: host103
      rack: r2
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20164
  status_port: 20184
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20164
  data_dir: /home/tidb/cluster/tidb-data/tikv-20164
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20164/log
  config:
    server.labels:
      area: northern
      dc: sjz
      host: host104
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20165
  status_port: 20185
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20165
  data_dir: /home/tidb/cluster/tidb-data/tikv-20165
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20165/log
  config:
    server.labels:
      area: europe
      dc: germany
      host: host105
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20166
  status_port: 20186
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20166
  data_dir: /home/tidb/cluster/tidb-data/tikv-20166
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20166/log
  config:
    server.labels:
      area: america
      dc: usa
      host: host106
      rack: r1
  arch: amd64
  os: linux
tiflash_servers: []
pd_servers:
- host: 192.168.58.133
  ssh_port: 22
  name: pd-192.168.58.133-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /home/tidb/cluster/tidb-deploy/pd-2379
  data_dir: /home/tidb/cluster/tidb-data/pd-2379
  log_dir: /home/tidb/cluster/tidb-deploy/pd-2379/log
  arch: amd64
  os: linux
monitoring_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 9090
  ng_port: 12020
  deploy_dir: /home/tidb/cluster/tidb-deploy/prometheus-9090
  data_dir: /home/tidb/cluster/tidb-data/prometheus-9090
  log_dir: /home/tidb/cluster/tidb-deploy/prometheus-9090/log
  external_alertmanagers: []
  arch: amd64
  os: linux
grafana_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 3000
  deploy_dir: /home/tidb/cluster/tidb-deploy/grafana-3000
  arch: amd64
  os: linux
  username: admin
  password: admin
  anonymous_enable: false
  root_url: ""
  domain: ""
alertmanager_servers:
- host: 192.168.58.133
  ssh_port: 22
  web_port: 9093
  cluster_port: 9094
  deploy_dir: /home/tidb/cluster/tidb-deploy/alertmanager-9093
  data_dir: /home/tidb/cluster/tidb-data/alertmanager-9093
  log_dir: /home/tidb/cluster/tidb-deploy/alertmanager-9093/log
  arch: amd64
  os: linux
[tidb@centos ~]$

| username: wuxiangdong | Original post link

america is not a value of the dc label; it's a value of the area label. In your config the dc values are bj1, bj2, sjz, germany, and usa; europe and america appear only under area.

| username: tjdagang1 | Original post link

Indeed, the SQL in the documentation does not match the area and dc label values in my config.
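
For anyone hitting the same error: a statement consistent with the labels shown above would use the area values (or the actual dc values germany and usa) instead of the documentation's dc=europe / dc=america. A sketch, not the poster's verified fix:

ALTER PLACEMENT POLICY northernpolicy
  LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]'
  -- area=europe and area=america match the stores on ports 20165/20166
  FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,"+dc=sjz": 1,"+area=europe": 1,"+area=america": 1}';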

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.