How to Scale Down Specific Nodes in TiDB Deployed on Kubernetes

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: k8s部署的Tidb如何缩容指定节点

| username: TiDBer_hf1k4gsi

[Test Environment] TiDB
[TiDB Version] 6.5.2
[Operator Version] 1.4.4
[Reproduction Path]
Currently I have 6 TiKV nodes and, due to resource constraints, I need to scale in.
The node to be removed is tikv-3.
Following the documentation, I have already evicted the leaders and regions from that store and stopped scheduling to it.
But if I just delete the pod, Kubernetes recreates tikv-3.
The only workaround I can imagine is to scale in tikv-5 (and tikv-4) as well so that the scale-in eventually reaches tikv-3, but that would cause unnecessary data migration.
[Problem Encountered: Symptoms and Impact]
I hope there is an elegant way to remove a specific node.

[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
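
For reference, the leader and region eviction described above is typically done through pd-ctl. A minimal sketch, assuming a cluster named `basic` in namespace `tidb-cluster` and that tikv-3 maps to store ID 4 (all hypothetical values; look them up first):

```shell
# Find the store ID that corresponds to the tikv-3 pod.
kubectl exec -n tidb-cluster basic-pd-0 -- /pd-ctl store

# Move all Raft leaders off that store (assumed store ID 4).
kubectl exec -n tidb-cluster basic-pd-0 -- \
  /pd-ctl scheduler add evict-leader-scheduler 4

# Ask PD to migrate the regions away; the store goes Offline,
# then Tombstone once it no longer holds any data.
kubectl exec -n tidb-cluster basic-pd-0 -- /pd-ctl store delete 4
```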


| username: tidb菜鸟一只 | Original post link

You can only specify the number of replicas. For example, if replicas is 4 and you change it to 3, the cluster scales in to 3 nodes; you shouldn't be able to pick which particular node gets removed.
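
In practice that means editing `spec.tikv.replicas` on the TidbCluster object; the underlying StatefulSet then removes the pod with the highest ordinal. A minimal sketch (cluster name and namespace are hypothetical, as above):

```shell
# Scale TiKV from 6 to 5 replicas; by default the StatefulSet
# deletes the highest-ordinal pod, i.e. tikv-5, not tikv-3.
kubectl patch tc basic -n tidb-cluster --type merge \
  -p '{"spec":{"tikv":{"replicas":5}}}'
```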

| username: TiDBer_hf1k4gsi | Original post link

So it can only scale in the last node, tikv-5.

| username: yiduoyunQ | Original post link

Refer to the documentation: Advanced StatefulSet Controller | PingCAP Docs.
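
The Advanced StatefulSet controller covers exactly this case: once it is enabled in TiDB Operator (the `AdvancedStatefulSet=true` feature gate, plus installing the Advanced StatefulSet CRD), you can mark a specific ordinal for deletion with a delete-slots annotation and lower replicas in the same step. A sketch under those assumptions, reusing the hypothetical cluster name and namespace from above:

```shell
# Mark ordinal 3 for deletion; with the Advanced StatefulSet
# controller enabled, the next scale-in removes tikv-3.
kubectl annotate tc basic -n tidb-cluster \
  tikv.tidb.pingcap.com/delete-slots='[3]'

# Scale in from 6 to 5; the remaining pods are tikv-0,1,2,4,5.
kubectl patch tc basic -n tidb-cluster --type merge \
  -p '{"spec":{"tikv":{"replicas":5}}}'
```

The same pattern applies to PD and TiDB pods via the `pd.tidb.pingcap.com/delete-slots` and `tidb.tidb.pingcap.com/delete-slots` annotations.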

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.