Errors in Logs After Upgrading TiKV & TiDB to Version 6.1

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv&tidb升级6.1以后出现一些错误日志

| username: a398058068

【TiDB Usage Environment】
Production
【TiDB Version】
6.1.0
【Encountered Issues】

  1. The TiKV log continuously shows ["check leader failed"] [to_store=46] [error="[rpc failed] RpcFailure: 12-UNIMPLEMENTED"]
  2. The TiDB log continuously reports "Got too many pings from the client, closing the connection"

【Reproduction Path】What operations were performed to encounter the issue
【Problem Phenomenon and Impact】

tikv.log

tidb log output from the console

| username: Billmay表妹 | Original post link

What actions did you take before it appeared?

| username: a398058068 | Original post link

Upgrading from 5.4 to 6.1

tiup cluster upgrade tidb v6.1.0

| username: a398058068 | Original post link

Is there anyone who can help answer this question? There is very little information available. The only thing I could find is the topic "Got too many pings from the client, closing the connection and TiKV server timeout" on the TiDB Q&A community, which doesn't seem to be very helpful.

| username: jansu-dev | Original post link

  1. The "check leader failed" error is not closely related to the error below. If you are concerned about it, or it affects your business, you will need to trace the logs in detail.
  2. The "Got too many pings from the client" error is caused by the internal gRPC health check between TiDB components. Apart from the log entry, it has no business impact; the product behavior is simply that the TiDB log intermittently shows this message. The trigger is that gRPC PermitWithoutStream is set to true (a hard-coded setting, not user-adjustable), so the gRPC client keeps sending health check pings to the gRPC server even when there is no active stream. If more than 2 pings arrive within 2 hours, this log is printed. A minimal sketch of the mechanism is shown after this list.
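
For reference, here is a minimal Go sketch of the keepalive mechanism described above. This is illustrative only, not TiDB's actual code: the address, intervals, and policy values are assumptions. The client-side PermitWithoutStream option allows keepalive pings when no stream is active, and it is the server's keepalive enforcement policy that emits the "Got too many pings from the client, closing the connection" message when pings arrive faster than it allows:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// Client side: PermitWithoutStream=true lets the client keep sending
	// keepalive pings even when no RPC stream is active.
	conn, err := grpc.Dial("127.0.0.1:10080", // illustrative address
		grpc.WithInsecure(),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                10 * time.Second, // illustrative ping interval
			Timeout:             3 * time.Second,
			PermitWithoutStream: true,
		}),
	)
	if err == nil {
		defer conn.Close()
	}

	// Server side: if a client pings more often than MinTime allows (or pings
	// with no active stream while PermitWithoutStream is false), grpc-go logs
	// "Got too many pings from the client, closing the connection" and closes
	// the connection with a GOAWAY(too_many_pings).
	srv := grpc.NewServer(grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
		MinTime:             5 * time.Minute, // illustrative minimum ping interval
		PermitWithoutStream: false,
	}))
	_ = srv
}
```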
| username: a398058068 | Original post link

Yes, error 1 is from the TiKV log and error 2 is from the TiDB log. However, both only appeared after the upgrade: error 1 occurs almost every second, and error 2 roughly every 10 seconds. Even if they don't affect the business, should such errors be reported at all? Is this something TiDB itself should fix in later versions, or something users are expected to troubleshoot and resolve? It feels very strange to keep logging these errors without addressing them, and it's unclear whether some hidden issue might arise. There are basically zero answers online for either error.

| username: jansu-dev | Original post link

Yes, the issue has been identified within TiDB and will be fixed in a future version.

| username: a398058068 | Original post link

tikv.log (18.8 MB)

I’ll also upload the full log for issue 1. Currently, I haven’t found any relevant context.

| username: weixiaobing | Original post link

Is this a bug in version 6.1?

| username: 我是咖啡哥 | Original post link

After the upgrade, I also encountered a lot of these logs:
[advance.rs:296] ["check leader failed"] [to_store=111521721] [error="[rpc failed] RpcFailure: 12-UNIMPLEMENTED"]

| username: Lily2025 | Original post link

Was this log generated during the upgrade process, or did it continue to be generated even after the upgrade was successful?

| username: 我是咖啡哥 | Original post link

It keeps being generated even after the upgrade succeeded. Mine is the same.

| username: mayjiang0203 | Original post link

Info-level errors can be ignored. This error actually occurs during the normal flow: when the instance that initiated the RPC has collected enough information, it actively cancels the request, and this error is reported upon cancellation. This has already been optimized on the master branch, which filters out cancel-type errors (see the sketch below).
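
To illustrate what "filtering out cancel-type errors" means in practice: TiKV's actual fix lives in its Rust code, so the Go snippet below is only a language-neutral sketch of the pattern, and the helper name and values are made up for illustration.

```go
package main

import (
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// logCheckLeaderError is a hypothetical helper: it skips the expected
// cancellation that happens once the caller has already collected enough
// check-leader responses, and only logs real RPC failures.
func logCheckLeaderError(storeID uint64, err error) {
	if err == nil {
		return
	}
	if st, ok := status.FromError(err); ok && st.Code() == codes.Canceled {
		// Expected: the request was canceled on purpose, not a real failure.
		return
	}
	log.Printf("check leader failed, to_store=%d, error=%v", storeID, err)
}

func main() {
	// A canceled RPC is silently ignored; anything else is still logged.
	logCheckLeaderError(46, status.Error(codes.Canceled, "context canceled"))
	logCheckLeaderError(46, status.Error(codes.Unimplemented, "rpc failed"))
}
```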

| username: cs58_dba | Original post link

According to the expert above, this can be ignored, just like backoff errors.

| username: system | Original post link

This topic was automatically closed 1 minute after the last reply. No new replies are allowed.