After modifying the TiKV parameters of a TiDB cluster deployed on k8s, the TiKV nodes did not pick up the change

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 修改k8s上部署的tidb集群tikv参数之后tikv节点没有修改

| username: tidb菜鸟一只

[TiDB Usage Environment] Production Environment
[TiDB Version]
[Reproduction Path]
Execute kubectl edit tc tidb -n tidb-admin to modify the cluster's TiKV parameters (see the sketch at the end of this post)
[Encountered Issue: Symptoms and Impact]
Cluster parameters were not modified
[Resource Configuration]
K8S deployment with 3 TiDB, 3 TiKV, 3 PD, each node with 10 CPUs and 40GB memory
[Attachments: Screenshots/Logs/Monitoring]
[screenshot]
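For reference, a minimal sketch of where such a change is made, assuming tidb-operator manages the cluster: the cluster name tidb and namespace tidb-admin come from the command above, the parameter shown is only a placeholder, and the exact shape of spec.tikv.config depends on the tidb-operator version (newer versions take a TOML string).

```shell
# Edit the TidbCluster object directly, as in the reproduction path above ...
kubectl edit tc tidb -n tidb-admin

# ... or patch just the TiKV config section (placeholder parameter shown).
# With configUpdateStrategy: RollingUpdate, tidb-operator regenerates the TiKV
# ConfigMap and then rolls the TiKV pods one by one.
kubectl patch tc tidb -n tidb-admin --type merge -p '
spec:
  tikv:
    config: |
      [storage.block-cache]
      capacity = "16GB"
'
```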

| username: yiduoyunQ | Original post link

  1. Refer to TiDB FAQs on Kubernetes (Kubernetes 上的 TiDB 集群常见问题 | PingCAP 文档中心): the default configUpdateStrategy is RollingUpdate (see the check below).
  2. Refer to Common Tips for Managing TiDB Clusters on Kubernetes (Kubernetes 上的 TiDB 集群管理常用使用技巧 | PingCAP 文档中心): by default, the operator waits for each TiKV pod's Region leaders to be evicted before restarting it.
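A quick way to confirm which strategy is in effect (a sketch, reusing the cluster name and namespace from the original post; the pod label is the one tidb-operator normally applies):

```shell
# Print the update strategy currently set on the TidbCluster object.
kubectl get tc tidb -n tidb-admin -o jsonpath='{.spec.configUpdateStrategy}{"\n"}'

# Watch the TiKV pods while the operator performs the rolling update; each pod
# is restarted only after its Region leaders have been evicted.
kubectl get pods -n tidb-admin -l app.kubernetes.io/component=tikv -w
```
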
| username: tidb菜鸟一只 | Original post link

I am currently using the RollingUpdate configuration, so I guess I have to wait for the TiKV leaders to be evicted. Actually, the TiKV parameters I modified support online changes, but on k8s a parameter changed online cannot be persisted to the configuration file. So after I modified them online, I received this warning:

bad request to http://tidb-tikv-0.tidb-tikv-peer.tidb-admin.svc:20180/config: failed to update, error: Os { code: 30, kind: ReadOnlyFilesystem, message: "Read-only file system" }

However, when I checked the parameters, they had already been modified. Does that mean the new value is already in effect?

Afterwards, I modified the same parameters again by editing the configuration file. Should I wait for the TiKV leaders to be evicted and the nodes to restart, or will the new configuration only take effect the next time the whole cluster restarts?
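For reference, the value TiKV is actually running with can be read back from the status port that appears in the error message above (a sketch; port-forwarding is used here only to reach the pod from outside the cluster):

```shell
# Forward the TiKV status port of the pod named in the error message, then
# dump its current runtime configuration as JSON.
kubectl port-forward -n tidb-admin pod/tidb-tikv-0 20180:20180 &
curl -s http://127.0.0.1:20180/config
```

If the new value shows up here, the online change is active in the running process, which matches the observation that the checked value had already changed; the warning only means TiKV could not write the change back to its read-only mounted configuration file.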

| username: yiduoyunQ | Original post link

You can observe the leader/Region changes and the impact on the cluster in Grafana.
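Besides Grafana, the per-store leader counts can also be checked from PD (a sketch; it assumes pd-ctl ships in the PD image and that the PD pod follows the same naming pattern as tidb-tikv-0 in the error above):

```shell
# Print per-store status, including leader_count and region_count; during a
# rolling update, the leaders of the store being restarted should drain first.
kubectl exec -n tidb-admin tidb-pd-0 -- /pd-ctl -u http://127.0.0.1:2379 store
```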

| username: tidb菜鸟一只 | Original post link

The parameters have taken effect, but the cluster has not been rolling-restarted. I asked another colleague, and it seems he performed some special operation on the scheduler or on one of the TiDB cluster's pods… This might be the cause, but the exact reason has not been determined yet.
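One way to verify whether a rolling restart actually happened (a sketch, reusing the assumed object names above; the StatefulSet name follows the tidb-tikv pod naming seen earlier in the thread):

```shell
# Compare the start times of the TiKV pods; pods that were rolled will have
# recent start times.
kubectl get pods -n tidb-admin -l app.kubernetes.io/component=tikv \
  -o custom-columns=NAME:.metadata.name,STARTED:.status.startTime

# Check whether the TiKV StatefulSet has finished (or even started) a rollout.
kubectl rollout status statefulset/tidb-tikv -n tidb-admin
```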

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.