Failed to deploy TiDB on k8s, no error reported

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: k8s部署tidb失败,没有报错

| username: chenhanneu

[TiDB Usage Environment] Production Environment / Testing / Poc
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration]
[TiDB Operator Version]: v1.5.2
[K8s Version]: v1.29.2
[Attachments: Screenshots / Logs / Monitoring]
NAME            NAMESPACE    REVISION   UPDATED                                    STATUS     CHART                  APP VERSION
tidb-operator   tidb-admin   1          2024-03-13 13:36:47.522594135 +0800 CST    deployed   tidb-operator-v1.5.2   v1.5.2
tidb-operator pod logs:
E0313 06:17:24.355669 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:18:09.206798 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:19:03.449916 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:19:52.099712 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)

kubectl apply -f tidb-test.yaml
tidbcluster.pingcap.com/tidb-test created

But the pod was not created. No error messages were seen either.

Where should I check to find out what went wrong?

| username: MrSylar | Original post link

What does this return?

| username: chenhanneu | Original post link

The image shows

| username: TiDBer_jYQINSnf | Original post link

Run kubectl get tc -n <namespace> -o yaml to see how your TidbCluster (tc) is defined.
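
As a concrete (hypothetical) example, if the cluster object is named tidb-test and was created in a namespace also called tidb-test, the check could look like the following; the spec shows what the Operator was asked to build and the status shows how far reconciliation has progressed:

kubectl get tc tidb-test -n tidb-test -o yaml
kubectl describe tc tidb-test -n tidb-test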

| username: TiDBer_5cwU0ltE | Original post link

Did you follow the documentation step by step? If you follow it closely, the deployment usually goes smoothly.

| username: redgame | Original post link

The error messages indicate that the Operator itself is not working properly, which in turn prevents tidb-test from being deployed.
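
A minimal way to confirm whether the Operator itself is healthy, assuming it was installed into the tidb-admin namespace as shown above (tidb-controller-manager is the chart's default deployment name):

kubectl get pods -n tidb-admin
kubectl logs -n tidb-admin deployment/tidb-controller-manager --tail=100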

| username: 连连看db | Original post link

It is possible that the versions are incompatible.

| username: TiDBer_aaO4sU46 | Original post link

Without a clear error message, this is hard to troubleshoot.

| username: chenhanneu | Original post link

Copied the minimal resource template from the documentation and only changed the storageClassName to nfs-client.
(tidb-operator/examples/advanced/tidb-cluster.yaml at v1.5.2 · pingcap/tidb-operator · GitHub)
After applying it, the symptom is exactly the same as described above.

| username: TiDBer_jYQINSnf | Original post link

Your storage class is nfs-client everywhere, so is there an NFS provisioner running for it?
Run kubectl get pvc -n xxx to check whether the PVCs have been created and whether the corresponding PVs exist.
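
For example, assuming the cluster namespace is tidb-test (hypothetical here) and the storage class is the nfs-client mentioned above, the checks could look like this:

kubectl get storageclass nfs-client
kubectl get pvc -n tidb-test
kubectl get pv | grep tidb-test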

| username: chenhanneu | Original post link

These two PVCs are just for testing; the PVCs for the TiDB cluster have not been created.

| username: TiDBer_jYQINSnf | Original post link

The PVCs haven't shown up yet.
Grab the Operator's logs (the tidb-controller-manager one), filter for this namespace, and post a snippet for us to take a look.
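
A sketch of that log filter, assuming the Operator runs in tidb-admin and the cluster namespace is tidb-test (both hypothetical here):

kubectl logs -n tidb-admin deployment/tidb-controller-manager --tail=2000 | grep tidb-test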

| username: chenhanneu | Original post link

There are no matching log entries in either the tidb-controller-manager logs or the kube-controller-manager-node logs.
After deleting and reapplying the YAML several times, the kube-controller-manager-node logs still show no changes.

| username: chenhanneu | Original post link

kubectl get crd
Indeed, the TidbDashboard CRD is missing.
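
If you only want to see whether the Operator's CRDs are registered, a filter like this (the grep pattern is my own, not from the thread) narrows the list down to the pingcap.com group:

kubectl get crd | grep pingcap.com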

Re-downloaded the v1.5.2 CRD manifest:
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml
kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/dmclusters.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/tidbdashboards.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com configured
customresourcedefinition.apiextensions.k8s.io/tidbngmonitorings.pingcap.com configured


The CustomResourceDefinition “tidbclusters.pingcap.com” is invalid: metadata.annotations: Too long: must have at most 262144 bytes


The tidbdashboards CRD is there now, but tidbclusters is still missing.
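
The "Too long" failure above is most likely the client-side apply annotation limit: kubectl apply stores the entire manifest in the kubectl.kubernetes.io/last-applied-configuration annotation, and the tidbclusters CRD is larger than the 262144-byte cap, so that one CRD never gets updated. A sketch of the usual workarounds, using the same v1.5.2 URL as above:

# fresh install: kubectl create does not write the last-applied annotation
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml
# if the CRDs already exist, replace them instead of client-side apply
kubectl replace -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml
# or use server-side apply, which also avoids the annotation
kubectl apply --server-side -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml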

Then tried the dev (master) version:
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
NAME CREATED AT
backups.pingcap.com 2024-03-13T11:23:54Z
backupschedules.pingcap.com 2024-03-13T11:23:57Z
dmclusters.pingcap.com 2024-03-13T11:23:57Z
restores.pingcap.com 2024-03-13T11:23:57Z
tidbclusterautoscalers.pingcap.com 2024-03-13T11:23:57Z
tidbclusters.pingcap.com 2024-03-13T11:24:14Z
tidbdashboards.pingcap.com 2024-03-13T11:24:21Z
tidbinitializers.pingcap.com 2024-03-13T11:24:21Z
tidbmonitors.pingcap.com 2024-03-13T11:24:22Z
tidbngmonitorings.pingcap.com 2024-03-13T11:24:23Z
No errors this time, and all 10 CRD names are listed.

Applied the cluster YAML again, and the pods were created normally.
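
To confirm the cluster keeps reconciling after that, something like the following could be used, again assuming the tidb-test name and namespace (hypothetical):

kubectl get tc -n tidb-test
kubectl get pods -n tidb-test -w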

| username: chenhanneu | Original post link

Thank you, everyone.

| username: TiDBer_jYQINSnf | Original post link

That CRD must have been added recently; it isn't in the version we run. I saw that error before, but out of habit I ignored it.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.