Debugging TiKV

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 关于tikv的调试 (About debugging TiKV)

| username: liul

I would like to ask, I want to debug the TiKV code. When deploying, it usually starts with 3 TiKV nodes. Can I start with only one TiKV? Will there be any impact? I’m a newbie to TiDB, just getting started.

| username: tidb菜鸟一只 | Original post link

You need to set the replica count to 1 to run with only one TiKV node. By default the replica count is 3, and with that setting a single TiKV cannot serve requests.

| username: liul | Original post link

How should I set it up? I compiled TiDB, TiKV, and PD myself. Since I mainly want to debug and get familiar with the code, I feel that multiple replicas are not necessary. One TiKV should theoretically be enough.

| username: 有猫万事足 | Original post link

Congratulations, you successfully compiled it.

replication.max-replicas is the PD configuration item that sets the number of replicas. It can be modified online, or set in the configuration file using tiup cluster edit-config.
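For a tiup-managed cluster, the setting would go under server_configs in the file opened by tiup cluster edit-config — a sketch (your cluster name will differ):

```yaml
server_configs:
  pd:
    replication.max-replicas: 1
```

After saving, apply it with tiup cluster reload <cluster-name> -R pd.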

| username: tidb菜鸟一只 | Original post link

Connect directly to TiDB.
View the configuration:
SHOW CONFIG WHERE name LIKE 'replication.max-replicas';
Modify the configuration:
SET CONFIG pd `replication.max-replicas` = 1;
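If you have only self-compiled binaries and no TiDB running yet, the same change can be made with pd-ctl (built from the PD repository) against a running PD — a sketch, assuming PD listens on the default address:

```shell
pd-ctl -u http://127.0.0.1:2379 config set max-replicas 1
```

This takes effect online; no restart is needed.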

| username: liul | Original post link

I reinstalled a virtual machine and only managed to compile successfully in a clean environment. I guess the previous environment had too many build toolchains installed, which caused conflicts.

| username: 有猫万事足 | Original post link

Cool :+1:

| username: liul | Original post link

TiUP is only available if you install it separately; the self-compiled binaries don't come with it.

| username: 有猫万事足 | Original post link

You can also connect to TiDB and make the change online.
Or install TiUP; it only takes a few minutes.
I read the debugging documentation, and it says that to debug TiKV on its own, PD and TiKV are enough; TiDB is not required.

| username: liul | Original post link

For creating tables and inserting data, can TiDB really be skipped? The documentation I see shows MySQL connecting to TiDB, right? And if I install TiUP, how do I get it to start the self-compiled binaries?

| username: 有猫万事足 | Original post link

You can use tikv-client to connect to TiKV without TiDB, treating TiKV as a transactional key-value store.
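As a sketch of what that looks like with the Rust tikv-client crate (assumes a PD reachable at 127.0.0.1:2379 with TiKV behind it, and tikv-client plus tokio in Cargo.toml; crate APIs vary by version):

```rust
use tikv_client::RawClient;

#[tokio::main]
async fn main() -> Result<(), tikv_client::Error> {
    // Connect through PD; the client discovers TiKV nodes from there.
    let client = RawClient::new(vec!["127.0.0.1:2379"]).await?;

    // Use TiKV directly as a key-value store, no TiDB involved.
    client.put("hello".to_owned(), "world".to_owned()).await?;
    let value = client.get("hello".to_owned()).await?;
    println!("{:?}", value);
    Ok(())
}
```

There is also a TransactionClient in the same crate if you want transactional semantics instead of the raw API.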

The latter issue does seem a bit troublesome. It’s better to connect through TiDB. Alternatively, you could modify the PD configuration file.

[root@tidb1 conf]# pwd
/tidb-deploy/pd-2379/conf
[root@tidb1 conf]# cat pd.toml 
# WARNING: This file is auto-generated. Do not edit! All your modification will be overwritten!
# You can use 'tiup cluster edit-config' and 'tiup cluster reload' to update the configuration
# All configuration items you want to change can be added to:
# server_configs:
#   pd:
#     aa.b1.c3: value
#     aa.b2.c4: value

When deploying formally, this configuration file will be generated by tiup, and it is not recommended to modify it. However, if you are compiling and don’t have tiup, it might be faster to directly modify this file and restart PD.
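In that case, the change is a single item under the [replication] section of pd.toml (restart PD afterwards for it to take effect):

```toml
[replication]
max-replicas = 1
```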

| username: liul | Original post link

There is a config.toml file under pd, let me check the instructions on how to modify it. Does the TiDB configuration need to be modified?

| username: liul | Original post link

I see the config.toml, let’s see how to modify it.

I see that you can configure a topo.yaml during installation and configure it inside.

Is this the configuration in pd? Are the config.toml and the cluster configuration file the same?

| username: 有猫万事足 | Original post link

Yes, the PD-related settings in topo.yaml are eventually written into pd.toml. TiDB and TiKV each have their own configuration files too, and their content also comes from topo.yaml; tiup automatically generates the configuration file for each component.

For example, my settings in the topo.yaml file:

server_configs:
  tidb:
    experimental.allow-expression-index: true
    performance.enforce-mpp: true
  tikv: {}
  pd:
    tso-update-physical-interval: 1ms

The content of pd.toml is as follows:

tso-update-physical-interval = "1ms"

The content of tidb.toml is as follows:

[experimental]
allow-expression-index = true

[performance]
enforce-mpp = true

In other words, there are some format changes. You can refer to the standard configuration of these two files on GitHub to confirm the format.

| username: liul | Original post link

It seems the compiled version doesn't come with these configuration files. I found a way to start from an installation package; let me get it running first and then figure out the settings.
Single-node deployment and startup of TiDB - Likecs.com

| username: 有猫万事足 | Original post link

As long as the compiled pd-server accepts the following flag:
--config string config file
you should be able to point it at a pd.toml file with this parameter.

Here is a complete startup parameter process for your reference:

bin/pd-server --name=pd-172.21.16.10-2379 --client-urls=http://0.0.0.0:2379 --advertise-client-urls=http://172.21.16.10:2379 --peer-urls=http://0.0.0.0:2380 --advertise-peer-urls=http://172.21.16.10:2380 --data-dir=/tidb-data/pd-2379 --initial-cluster=pd-172.21.16.10-2379=http://172.21.16.10:2380,pd-172.21.16.17-2379=http://172.21.16.17:2380,pd-172.21.0.143-2379=http://172.21.0.143:2380 --config=conf/pd.toml --log-file=/tidb-deploy/pd-2379/log/pd.log

| username: zhanggame1 | Original post link

When deploying, writing just one TiKV in the topology is fine. Even though the parameter is set to 3 replicas, it does not prevent normal use.

| username: liul | Original post link

I mainly debug TiKV, so using just one feels more convenient.

| username: liul | Original post link

Currently, it can start, and MySQL can connect. The specific steps are:

  1. Start the PD service
nohup ./pd-server --data-dir=pd --log-file=pd.log &

You can use

netstat -anp | grep 2379

to check if it has started. 2379 is the default port.

  2. Start TiKV (regarding the port, the default in the configuration file config.rs is 20160)
nohup ./tikv-server --pd-endpoints="127.0.0.1:2379" --addr="127.0.0.1:20160" --data-dir=tikv1 --log-file=tikv1.log &
  3. Start TiDB
nohup ./tidb-server --store=tikv --path="127.0.0.1:2379" --log-file=tidb.log &

Check if it has started

netstat -anp | grep 4000

Use MySQL to access

mysql -h 127.0.0.1 -P 4000 -u root
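Once everything is up, one way to confirm that the single TiKV registered with PD is to query PD's HTTP API (assuming the default client port 2379); the response is a JSON list of stores:

```shell
curl http://127.0.0.1:2379/pd/api/v1/stores
```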