Issue with Placement Rules Configuration Not Taking Effect

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Placement Rules 配置不生效问题

| username: TiDBer_jQ7mFu99

【TiDB Environment】Production
【TiDB Version】v6.1.0
【Issue Encountered】Placement rules configuration not taking effect
【Reproduction Path】Operations performed that led to the issue
【Issue Phenomenon and Impact】

  1. Initially, when the cluster was deployed with tiup, the region label was not configured for TiKV; it was added later.
    Current configuration:

The datamid placement rule is applied to the bucket_info and cdn_relation tables.


For the bucket_info table, the LEADER_STORE_ID has been scheduled (not sure whether that was the result of scheduling or of the frequent TiKV modifications and restarts), but the number of PEERS is still 5.

For the cdn_relation table, the LEADER_STORE_ID is still on TiKV nodes in other regions.
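
For reference, one way to check the leader and peer distribution of these tables is a query against information_schema (host, port, and database names below are placeholders to adapt):

mysql -h <tidb-host> -P 4000 -u root -e "
  SELECT s.DB_NAME, s.TABLE_NAME, p.REGION_ID, p.STORE_ID, p.IS_LEADER
  FROM information_schema.TIKV_REGION_STATUS s
  JOIN information_schema.TIKV_REGION_PEERS p ON s.REGION_ID = p.REGION_ID
  WHERE s.TABLE_NAME IN ('bucket_info', 'cdn_relation');"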

This is the output of pd-ctl config show:
{
  "replication": {
    "enable-placement-rules": "true",
    "enable-placement-rules-cache": "false",
    "isolation-level": "",
    "location-labels": "cloud,zone,rack,host",
    "max-replicas": 5,
    "strictly-match-label": "false"
  },
  "schedule": {
    "enable-cross-table-merge": "true",
    "enable-joint-consensus": "true",
    "high-space-ratio": 0.7,
    "hot-region-cache-hits-threshold": 3,
    "hot-region-schedule-limit": 4,
    "hot-regions-reserved-days": 7,
    "hot-regions-write-interval": "10m0s",
    "leader-schedule-limit": 4,
    "leader-schedule-policy": "count",
    "low-space-ratio": 0.8,
    "max-merge-region-keys": 200000,
    "max-merge-region-size": 20,
    "max-pending-peer-count": 64,
    "max-snapshot-count": 64,
    "max-store-down-time": "30m0s",
    "max-store-preparing-time": "48h0m0s",
    "merge-schedule-limit": 8,
    "patrol-region-interval": "10ms",
    "region-schedule-limit": 2048,
    "region-score-formula-version": "v2",
    "replica-schedule-limit": 64,
    "split-merge-interval": "1h0m0s",
    "tolerant-size-ratio": 0
  }
}

| username: TiDBer_jQ7mFu99 | Original post link

The INPROGRESS status has been ongoing for a while now (over an hour), and the amount of data in the table is not large.
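
For context, the state can be seen in the Scheduling_State column of SHOW PLACEMENT (connection details below are placeholders):

mysql -h <tidb-host> -P 4000 -u root -e "SHOW PLACEMENT;"
# Columns: Target | Placement | Scheduling_State (PENDING / INPROGRESS / SCHEDULED)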

| username: xiaohetao | Original post link

Was it configured during the cluster initialization or after the initialization?

| username: xiaohetao | Original post link

If the configuration was added after initialization, it also needs to be saved (persisted) for it to take effect.

| username: xiaohetao | Original post link

Post the parameter configuration file (similar to rules.json) and let’s take a look.
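
If there is no separate rules file at hand, the rules currently stored in PD can also be dumped directly with pd-ctl (the PD address is a placeholder):

tiup ctl:v6.1.0 pd -u http://<pd-host>:2379 config placement-rules show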

| username: h5n1 | Original post link

The location-labels setting is missing "region". Use pd-ctl config set to add it.
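
Something along these lines (the PD address and the exact label order are placeholders to adapt):

tiup ctl:v6.1.0 pd -u http://<pd-host>:2379 config set location-labels cloud,region,zone,rack,host
# verify afterwards:
tiup ctl:v6.1.0 pd -u http://<pd-host>:2379 config show replication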

| username: TiDBer_jQ7mFu99 | Original post link

The region label was added to TiKV after initialization, and TiKV was then restarted.
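
For reference, the label was added roughly like this (cluster name, hosts, and label values are placeholders for what is actually configured):

tiup cluster edit-config <cluster-name>
# in the editor, each TiKV node carries its labels under config, e.g.:
#   tikv_servers:
#     - host: <tikv-host>
#       config:
#         server.labels: { cloud: "...", region: "...", zone: "...", rack: "...", host: "..." }
tiup cluster reload <cluster-name> -R tikv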

| username: TiDBer_jQ7mFu99 | Original post link

I added it with tiup edit-config and then reloaded PD with tiup reload, but it doesn't seem to take effect.

| username: TiDBer_jQ7mFu99 | Original post link

Are these the configurations? The rest basically follows the documentation without much change; only some directories were modified.

| username: TiDBer_jQ7mFu99 | Original post link

The configuration has been added and the placement policy has been re-applied to the table, but it is still INPROGRESS.
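
The placement was re-applied along these lines (the database name is a placeholder; datamid is the existing policy):

mysql -h <tidb-host> -P 4000 -u root -e "
  ALTER TABLE <db>.bucket_info  PLACEMENT POLICY = datamid;
  ALTER TABLE <db>.cdn_relation PLACEMENT POLICY = datamid;"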

| username: h5n1 | Original post link

Use pd-ctl store to check the label settings of TiKV.
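
For example (the PD address is a placeholder; jq only trims the output and is optional):

tiup ctl:v6.1.0 pd -u http://<pd-host>:2379 store \
  | jq '.stores[].store | {id, address, labels}'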

| username: TiDBer_jQ7mFu99 | Original post link

The stores do have region labels.

| username: Lucien-卢西恩 | Original post link

The PD parameters also need to be configured through pd-ctl; it seems that setting them in tiup edit-config did not take effect. You can check the live configuration with pd-ctl config show all. These settings are loaded when the cluster is created for the first time and are then persisted in etcd, so subsequent changes have to be made with pd-ctl.
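
For example (the PD address is a placeholder):

tiup ctl:v6.1.0 pd -u http://<pd-host>:2379 config show all
# If location-labels (or other replication settings) still differ from what was put
# into tiup edit-config, re-apply them with "config set <key> <value>" via pd-ctl.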