Error in Scaling PD for TiDB Deployed on k8s

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: k8s 部署的tidb扩容pd报错 (Error when scaling out PD in a TiDB cluster deployed on k8s)

| username: TiDBer_m3AMc0Q5

[TiDB Usage Environment] Test environment
[TiDB Version]
[Reproduction Path]
Executed the command: kubectl patch -n srit-tidb tc srit-tidb-cluster --type merge --patch '{"spec":{"pd":{"replicas":4}}}'
to scale PD out from the original three nodes to four.
[Encountered Problem]: The new node cannot join the cluster and keeps reporting the following errors (a diagnostic sketch follows at the end of this post):
[2022/12/26 07:19:44.903 +00:00] [INFO] [join.go:218] ["failed to open directory, maybe start for the first time"] [error="open /var/lib/pd/member: no such file or directory"]
[2022/12/26 07:19:46.353 +00:00] [FATAL] [main.go:91] ["join meet error"] [error="there is a member that has not joined successfully"] [stack="main.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:91\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"]
[Resource Configuration]
[TiDB Operator Version]:
[K8s Version]: 1.25.4
[Attachments: Screenshots/Logs/Monitoring]
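
A minimal diagnostic sketch for this symptom: the "there is a member that has not joined successfully" message usually means an earlier join attempt left a member registered in the PD cluster that never came up. Listing the members from a healthy PD pod shows whether that is the case. The pod naming (<cluster>-pd-<ordinal>) and the /pd-ctl path inside the PD container are assumptions based on TiDB Operator and pingcap/pd image defaults; adjust them to the actual deployment.

```shell
# List current PD members from a healthy PD pod (pod name and /pd-ctl path
# are assumptions based on TiDB Operator / pingcap/pd image defaults).
kubectl exec -n srit-tidb srit-tidb-cluster-pd-0 -- \
  /pd-ctl -u http://127.0.0.1:2379 member

# If a fourth member is listed but never became healthy, that half-joined
# entry is likely what the error refers to; it can be removed by name
# before retrying the scale-out:
#   kubectl exec -n srit-tidb srit-tidb-cluster-pd-0 -- \
#     /pd-ctl -u http://127.0.0.1:2379 member delete name <member-name>
```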

| username: xfworld | Original post link

Whether it’s PD or TiKV, an odd number of nodes is recommended…
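
If an odd member count is the goal, the same merge patch from the original post can simply target an odd replica number; the count of 5 below is only illustrative (3 would scale back to the original size), and the label selector assumes the standard TiDB Operator pod labels.

```shell
# Same patch as in the original post, but with an odd PD replica count.
kubectl patch -n srit-tidb tc srit-tidb-cluster --type merge \
  --patch '{"spec":{"pd":{"replicas":5}}}'

# Watch the PD pods roll out (label selector assumes TiDB Operator defaults).
kubectl get pods -n srit-tidb -l app.kubernetes.io/component=pd -w
```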

The log contains the error: “open /var/lib/pd/member: no such file or directory”

Path not found? Could it be a configuration issue?
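
One way to check that angle is to confirm the new PD pod actually has a bound PVC and that its data directory is mounted at /var/lib/pd. The pod ordinal (3) below is an assumption based on the Operator's <cluster>-pd-<ordinal> naming, and exec only works while the container is still running.

```shell
# Is there a bound PVC for the new PD member?
kubectl get pvc -n srit-tidb | grep pd

# Does the data directory exist inside the new pod?
# (Only possible while the container is running; the ordinal 3 is assumed.)
kubectl exec -n srit-tidb srit-tidb-cluster-pd-3 -- ls -la /var/lib/pd

# Logs from the previously crashed container, in case the FATAL line was
# preceded by more detail.
kubectl logs -n srit-tidb srit-tidb-cluster-pd-3 --previous | tail -n 50
```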

| username: tidb菜鸟一只 | Original post link

Is it useful to have so many PDs?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.