TiDB Error: Encountered Error

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb error报错 encountered error

| username: wiki-qi

【TiDB Usage Environment】 Production
【TiDB Version】 5.4
【Encountered Problem】
TiDB reported an error as follows:
[terror.go:307] ["encountered error"] [error="connection was bad"] [stack="github.com/pingcap/tidb/parser/terror.Log
\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/parser/terror/terror.go:307
github.com/pingcap/tidb/server.(*clientConn).Run
\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/conn.go:1125
github.com/pingcap/tidb/server.(*Server).onConn
\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/server.go:548"]

【Attachments】

Please provide the version information of each component, such as cdc/tikv, which can be obtained by executing cdc version/tikv-server --version.

| username: 啦啦啦啦啦 | Original post link

You need to check the specific error information on the client. Refer to this link:

| username: wiki-qi | Original post link

I have found another issue. My machine runs 2 TiDB services with load balancing set up in front of them, and I noticed that both services keep restarting. The specific error log is as follows:
[deadlock.rs:773] ["leader client failed"] [err="Grpc(RpcFinished(Some(RpcStatus { code: 1-CANCELLED, message: "Cancelled", details: })))"]

| username: wiki-qi | Original post link

Under what circumstances will tidb-server OOM? And how can OOM be prevented?

| username: wiki-qi | Original post link

Out of memory: Kill process 7350 (tidb-server) score 652 or sacrifice child

| username: jansu-dev | Original post link

  1. "connection was bad" indicates that the connection was interrupted, and the server.(*clientConn) frame in the stack shows that the connection was one initiated by the application;
  2. There are many scenarios in which tidb-server can run out of memory (OOM), so it is hard to give a direct conclusion; the root cause needs to be identified and fixed. The most common cause is a SQL statement whose computation requires a large amount of memory, exhausting the tidb-server's memory. You can start by examining slow SQL queries (see the sketch below). If the root cause still cannot be identified, you can collect a clinic report and post it here for everyone to discuss and analyze.
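
As a starting point for the slow-SQL check in point 2, here is a minimal sketch against TiDB's INFORMATION_SCHEMA.SLOW_QUERY table (CLUSTER_SLOW_QUERY aggregates all tidb-server instances, which matters with two instances behind a load balancer). The time window, LIMIT, and the 2 GiB quota are illustrative values, not recommendations; in 5.4, tidb_mem_quota_query is a session-scoped variable, and the instance-wide defaults come from the mem-quota-query / oom-action items in the tidb-server configuration.

```sql
-- Peak memory per statement is recorded in the slow log; Mem_max is in bytes.
-- Look for the statements that used the most memory in the last day.
SELECT Time, DB, Query_time, Mem_max, Query
FROM INFORMATION_SCHEMA.SLOW_QUERY
WHERE Time > NOW() - INTERVAL 1 DAY
ORDER BY Mem_max DESC
LIMIT 10;

-- Example only: cap the memory a single statement may use in this session
-- at 2 GiB. With oom-action = "cancel" in the tidb-server config, statements
-- exceeding the quota are cancelled instead of pushing the process toward an
-- OS-level OOM kill.
SET SESSION tidb_mem_quota_query = 2147483648;
```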