Parameter Viewing Issues

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 参数查看问题

| username: zhanggame1

[TiDB Usage Environment] Testing
[TiDB Version]
[Reproduction Path] What operations were performed to encounter the issue
[Encountered Issue: Issue Phenomenon and Impact]

The official documentation describes the following parameter:

max-replicas

My question is: where can I view this parameter? I installed a single-TiKV test machine and want to check whether it is set to 1.

  • Total number of replicas, i.e., the sum of leader and follower numbers. The default is 3, which means 1 leader and 2 followers. When this configuration is modified online, PD will adjust in the background to ensure the number of Region replicas matches the configuration.
  • Default value: 3
| username: ShawnYan | Original post link

https://docs.pingcap.com/zh/tidb/stable/dynamic-config#online-modification-of-pd-configuration

Refer to the official documentation and use SHOW CONFIG.
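
For reference, a minimal sketch of checking it from the MySQL client (the filter values are illustrative; replication.max-replicas is the PD item being asked about):

-- Current replica count as seen on the PD side
SHOW CONFIG WHERE type = 'pd' AND name = 'replication.max-replicas';

-- Or list all PD replication-related items
SHOW CONFIG WHERE type = 'pd' AND name LIKE 'replication.%';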

| username: zhanggame1 | Original post link

Thank you, I found it. It is indeed still the default 3, even though there is only one TiKV node.

| username: Kongdom | Original post link

What method did you use to search before? The first two results in the community search should be the ones. :thinking:

| username: zhanggame1 | Original post link

SHOW CONFIG can be used to check the configuration. My understanding was that, in theory, changing the value with the SET command should write it back to the PD configuration file, but I did not see it there.
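
For context, the kind of online change being discussed is roughly the following (a sketch; the value is only an example):

-- Modify the PD setting online from the MySQL client
SET CONFIG pd `replication.max-replicas` = 1;

-- Verify what the cluster currently reports
SHOW CONFIG WHERE type = 'pd' AND name = 'replication.max-replicas';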

| username: ShawnYan | Original post link

Well, the value lives inside PD itself, which is separate from the configuration file.
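
So to see the value PD itself is using, you can query PD directly, roughly like this (the ctl version and PD address are illustrative, taken from the topology later in this thread):

# Ask PD for its current replication settings
tiup ctl:v7.1.0 pd -u http://10.0.0.26:2379 config show replication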

| username: zhanggame1 | Original post link

That is the part I do not understand. There is a configuration file, and I have also changed the value with SET. If the two disagree, how can I tell they are inconsistent, and which one takes effect in the end?

| username: tidb菜鸟一只 | Original post link

Parameters modified with SET CONFIG need to be synchronized into the cluster topology with tiup cluster edit-config to take effect permanently. Otherwise, the values you set will be reset when the cluster is restarted or reloaded.
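
That workflow would look roughly like this (a sketch; the cluster name is taken from later in this thread):

# Persist the setting in the cluster topology under server_configs -> pd
tiup cluster edit-config tidb-test
#   server_configs:
#     pd:
#       replication.max-replicas: 1

# Push the change to the affected components
tiup cluster reload tidb-test -R pd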

| username: zhanggame1 | Original post link

My test shows exactly the opposite. For example, I set max-replicas to 1 in tiup cluster edit-config:
(screenshot)
After then setting it to 3 and restarting the cluster, it still shows 3.

| username: Kongdom | Original post link

Does tiup cluster reload report an error? That - host entry looks odd to me.

| username: zhanggame1 | Original post link

No errors.

| username: Kongdom | Original post link

Is it the same situation if the max-replicas configuration item is placed in the first entry?

| username: tidb菜鸟一只 | Original post link

Set it to 2 and configure the cluster to 1, then restart and see… It seems your cluster configuration did not take effect, so it stayed at the default of 3…
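
As a sketch, the comparison test would be something like this (commands and values are illustrative):

# Topology says 1 (server_configs -> pd -> replication.max-replicas: 1)
tiup cluster edit-config tidb-test

# Online value says 2 (run from the MySQL client)
#   SET CONFIG pd `replication.max-replicas` = 2;

# Restart and see which value PD ends up with
tiup cluster restart tidb-test
tiup ctl:v7.1.0 pd -u http://10.0.0.26:2379 config show replication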

| username: Kongdom | Original post link

:wink: I think it’s that - host. I’m not sure whether it parses the max-replicas setting as a configuration nested under host.

| username: zhanggame1 | Original post link

I tested again with a freshly installed cluster, changing nothing except max-replicas.


After restarting, max-replicas is still 2; it does not lose effect just because the cluster topology was not modified.

| username: zhanggame1 | Original post link

Single machine deployment test, here are the initialization parameters:

[root@tidb /]# cat topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]

pd_servers:
 - host: 10.0.0.26

tidb_servers:
 - host: 10.0.0.26

tikv_servers:
 - host: 10.0.0.26
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }

 - host: 10.0.0.26
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }

 - host: 10.0.0.26
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

monitoring_servers:
 - host: 10.0.0.26

grafana_servers:
 - host: 10.0.0.26

Here is the cluster configuration shown by tiup cluster show-config after changing max-replicas to 2 with SET CONFIG:

[root@tidb /]# tiup cluster  show-config tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.5/tiup-cluster show-config tidb-test
global:
  user: tidb
  ssh_port: 22
  ssh_type: builtin
  deploy_dir: /tidb-deploy
  data_dir: /tidb-data
  os: linux
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: /tidb-deploy/monitor-9100
  data_dir: /tidb-data/monitor-9100
  log_dir: /tidb-deploy/monitor-9100/log
server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
  pd:
    replication.enable-placement-rules: true
    replication.location-labels:
    - host
  tidb_dashboard: {}
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
  kvcdc: {}
  grafana: {}
tidb_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /tidb-deploy/tidb-4000
  log_dir: /tidb-deploy/tidb-4000/log
  arch: amd64
  os: linux
tikv_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /tidb-deploy/tikv-20160
  data_dir: /tidb-data/tikv-20160
  log_dir: /tidb-deploy/tikv-20160/log
  config:
    server.labels:
      host: logic-host-1
  arch: amd64
  os: linux
- host: 10.0.0.26
  ssh_port: 22
  port: 20161
  status_port: 20181
  deploy_dir: /tidb-deploy/tikv-20161
  data_dir: /tidb-data/tikv-20161
  log_dir: /tidb-deploy/tikv-20161/log
  config:
    server.labels:
      host: logic-host-2
  arch: amd64
  os: linux
- host: 10.0.0.26
  ssh_port: 22
  port: 20162
  status_port: 20182
  deploy_dir: /tidb-deploy/tikv-20162
  data_dir: /tidb-data/tikv-20162
  log_dir: /tidb-deploy/tikv-20162/log
  config:
    server.labels:
      host: logic-host-3
  arch: amd64
  os: linux
tiflash_servers: []
pd_servers:
- host: 10.0.0.26
  ssh_port: 22
  name: pd-10.0.0.26-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /tidb-deploy/pd-2379
  data_dir: /tidb-data/pd-2379
  log_dir: /tidb-deploy/pd-2379/log
  arch: amd64
  os: linux
monitoring_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 9090
  ng_port: 12020
  deploy_dir: /tidb-deploy/prometheus-9090
  data_dir: /tidb-data/prometheus-9090
  log_dir: /tidb-deploy/prometheus-9090/log
  external_alertmanagers: []
  arch: amd64
  os: linux
grafana_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 3000
  deploy_dir: /tidb-deploy/grafana-3000
  arch: amd64
  os: linux
  username: admin
  password: admin
  anonymous_enable: false
  root_url: ""
  domain: ""

TiKV configuration file:

[root@tidb conf]# cat tikv.toml
# WARNING: This file is auto-generated. Do not edit! All your modification will be overwritten!
# You can use 'tiup cluster edit-config' and 'tiup cluster reload' to update the configuration
# All configuration items you want to change can be added to:
# server_configs:
#   tikv:
#     aa.b1.c3: value
#     aa.b2.c4: value
[readpool]
[readpool.coprocessor]
use-unified-pool = true
[readpool.storage]
use-unified-pool = false

[server]
[server.labels]
host = "logic-host-1"

PD configuration file:

[root@tidb conf]# cat pd.toml
# WARNING: This file is auto-generated. Do not edit! All your modification will be overwritten!
# You can use 'tiup cluster edit-config' and 'tiup cluster reload' to update the configuration
# All configuration items you want to change can be added to:
# server_configs:
#   pd:
#     aa.b1.c3: value
#     aa.b2.c4: value
[replication]
enable-placement-rules = true
location-labels = ["host"]

pd-ctl shows that it has indeed been changed to 2:

» config show replication
{
  "max-replicas": 2,
  "location-labels": "host",
  "strictly-match-label": "false",
  "enable-placement-rules": "true",
  "enable-placement-rules-cache": "false",
  "isolation-level": ""
}

| username: tidb菜鸟一只 | Original post link

I made a mistake: I thought max-replicas was a TiKV parameter. Only TiKV parameters need to be written back through tiup after being changed with SET CONFIG. max-replicas is a PD parameter, and PD parameters can be modified online; once the change succeeds, it is persisted to etcd rather than to the configuration file, and subsequent runs use the value stored in etcd.
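
In other words, after an online change the PD configuration file on disk stays unchanged, but PD keeps serving the value persisted in etcd even across restarts. A rough way to confirm this (paths and version are taken from this thread and used illustratively):

# The setting is not written back to the config file
grep max-replicas /tidb-deploy/pd-2379/conf/pd.toml   # no match expected

# Yet PD still reports the online value after a restart
tiup cluster restart tidb-test -R pd
tiup ctl:v7.1.0 pd -u http://10.0.0.26:2379 config show replication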

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.