Be a Moocher: Can Someone Provide Me with a Valid Configuration Example for TiUP?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 做个伸手党:谁能赐我一个tiup的有效配置示例

| username: TiDBer_jYQINSnf

Hybrid deployment, with cores bound by NUMA node.
3 machines, each machine divided into 4 NUMA nodes, each node deploying one component — multiple TiDB, PD, and TiKV instances. Please provide an example.
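
For reference, TiUP's topology file supports a per-instance `numa_node` field for NUMA binding (it requires `numactl` on the target hosts). A minimal hedged sketch of one machine carved into four NUMA nodes — all IPs, ports, and node IDs below are placeholders:

```yaml
# Hypothetical NUMA-binding sketch: one component per NUMA node on one host.
pd_servers:
  - host: 10.0.1.1
    numa_node: "0"
tidb_servers:
  - host: 10.0.1.1
    numa_node: "1"
tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    numa_node: "2"
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    numa_node: "3"
```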

| username: TiDBer_jYQINSnf | Original post link

I have another question. TiUP is already running a TiDB cluster, and I want to scale it out using a new topology file. How do I do that? I’m not familiar with TiUP commands.

For example, originally one machine was configured with one TiKV, and now I want three TiKVs on that machine. The first TiKV’s port and directory stay unchanged, and the other two are newly added.
Should I modify topology.yaml and then run some command to scale out directly? What’s the fastest way?

It’s okay to stop the service.
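
One way to express that change is a scale-out file containing only the new instances (the existing TiKV stays untouched). A hedged sketch — the host, ports, and directories below are placeholders:

```yaml
# scale-out.yaml — contains ONLY the instances being added (hypothetical values).
# Each new TiKV needs its own ports and its own deploy/data directories.
tikv_servers:
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
  - host: 10.0.1.1
    port: 20162
    status_port: 20182
    deploy_dir: "/tidb-deploy/tikv-20162"
    data_dir: "/tidb-data/tikv-20162"
```

Then apply it with `tiup cluster scale-out <cluster-name> scale-out.yaml`, as described in the replies below.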

| username: lemonade010 | Original post link

```yaml
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.25.129

tidb_servers:
  - host: 192.168.25.129

tikv_servers:
  - host: 192.168.25.129
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 192.168.25.129
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 192.168.25.129
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 192.168.25.129

monitoring_servers:
  - host: 192.168.25.129

grafana_servers:
  - host: 192.168.25.129
```

| username: lemonade010 | Original post link

The `tiup cluster scale-out` command is used for cluster expansion. The internal logic of expansion is similar to deployment: the tiup-cluster component first establishes an SSH connection to the new node, creates the necessary directories on the target node, then deploys and starts the service. PD nodes are expanded by joining the cluster through the join method, which also updates the configuration of services associated with PD; other services start directly and join the cluster.

Syntax

```shell
tiup cluster scale-out <cluster-name> <topology.yaml> [flags]
```

  • `<cluster-name>` is the name of the cluster to operate on. If you forget the cluster name, you can check it with `tiup cluster list`.
  • `<topology.yaml>` is the pre-written scale-out topology file, which should contain only the topology of the new nodes.
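
As a concrete sketch of that syntax (the cluster name and file name below are assumptions, not values from the thread):

```shell
# Confirm the cluster name first
tiup cluster list

# Check the scale-out topology against the running cluster
# (assumed cluster name "test-cluster" and file "scale-out.yaml")
tiup cluster check test-cluster scale-out.yaml --cluster

# Apply the scale-out
tiup cluster scale-out test-cluster scale-out.yaml
```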

Options

-u, --user (string, default is the user executing the command)

Specifies the username to connect to the target machine. This user needs to have passwordless sudo root privileges on the target machine.

-i, --identity_file (string, default ~/.ssh/id_rsa)

Specifies the key file to connect to the target machine.

-p, --password

  • Use password login when connecting to the target machine, cannot be used simultaneously with -i/--identity_file.
  • Data type: BOOLEAN
  • This option is off by default, with a default value of false. Adding this option to the command and passing the value true or not passing a value will enable this feature.

--no-labels

  • When two or more TiKV are deployed on the same machine, there is a risk: since PD cannot perceive the cluster’s topology, it may schedule multiple replicas of a Region to different TiKV on the same physical machine, making this machine a single point of failure. To avoid this, users can use labels to specify that PD should not schedule the same Region to the same machine (refer to scheduling replicas by topology labels).
  • Data type: BOOLEAN
  • This option is off by default, with a default value of false. Adding this option to the command and passing the value true or not passing a value will enable this feature.

However, for a test environment, it may not matter if the replicas of a Region are scheduled to the same machine. In this case, you can use --no-labels to bypass the check.
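
Outside of test environments, the alternative to `--no-labels` is to declare the topology explicitly. A minimal hedged sketch pairing PD's `location-labels` with per-instance `server.labels` — here two TiKV instances on the same physical machine (placeholder IP) share one `host` label, so PD will not place two replicas of a Region on that machine:

```yaml
server_configs:
  pd:
    replication.location-labels: ["host"]

tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "machine-1" }
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "machine-1" }
```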

--skip-create-user

When expanding the cluster, tiup-cluster will first check if the username specified in the topology file exists. If it does not exist, it will create one. Specifying the --skip-create-user option will skip the user creation step.

-h, --help

  • Outputs help information.
  • Data type: BOOLEAN
  • This option is off by default, with a default value of false. Adding this option to the command and passing the value true or not passing a value will enable this feature.
| username: Hacker007 | Original post link

See Using TiUP to Scale Out and Scale In a TiDB Cluster | PingCAP Documentation Center: first scale out, then scale in the parts you no longer need. Scaling TiKV out and in takes some time, since data has to be migrated.
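
The scale-in half of that procedure can be sketched as follows (cluster name, host, and port are placeholders):

```shell
# Remove the old TiKV instance once the new ones are serving;
# TiKV scale-in is asynchronous — Regions migrate off the node first.
tiup cluster scale-in test-cluster --node 10.0.1.1:20160

# Watch the node go from "Pending Offline" to "Tombstone"
tiup cluster display test-cluster

# Clean up Tombstone nodes afterwards
tiup cluster prune test-cluster
```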