TiKV Error: Failed to update max timestamp for region 732019009: Other("[components/pd_client/src/tso.rs:94]: TimestampRequest channel is closed")

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TiKV error: Failed to update max timestamp for region 732019009: Other("[components/pd_client/src/tso.rs:94]: TimestampRequest channel is closed")

| username: Hacker_g3b9VBO9 | Original post link

TiDB version 6.1.1

| username: Hacker_g3b9VBO9 | Original post link

There is also this error; the cluster is unable to start.

| username: Lucien-卢西恩 | Original post link

Hello~ Have you checked whether the PD status is normal? Both errors point to a possible anomaly in the PD cluster. You can check the PD leader’s logs and the network for abnormalities and troubleshoot from there.
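
For reference, a minimal sketch of that check using pd-ctl through tiup (the endpoint 10.0.0.80:2379 is borrowed from a command later in this thread; substitute your own PD address and match the ctl version to your cluster version):

```shell
# Check the health of every PD member.
tiup ctl:v6.1.1 pd -u http://10.0.0.80:2379 health
# Show the current PD leader and the full member list.
tiup ctl:v6.1.1 pd -u http://10.0.0.80:2379 member leader show
tiup ctl:v6.1.1 pd -u http://10.0.0.80:2379 member
```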

| username: Hacker_g3b9VBO9 | Original post link

If there is an issue with PD, how can I use a command to designate one of the members as the PD leader?

| username: zhimadi | Original post link

```shell
tiup ctl:v6.0.0 pd -i -u http://10.0.0.80:2379
member leader transfer pd-10.0.0.81-2379
```
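
If the transfer succeeds, the same interactive session started with -i can confirm it; a minimal sketch (the member name pd-10.0.0.81-2379 comes from the command above):

```shell
# Inside the interactive pd-ctl session, verify the leader moved.
member leader show
```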

| username: Hacker_g3b9VBO9 | Original post link

| username: Hacker_g3b9VBO9 | Original post link

I later upgraded to version 6.2.0, but there are still issues.

| username: h5n1 | Original post link

Check the output of tiup cluster display. Can you explain the upgrade process?
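
For context, a minimal sketch of that check, assuming the cluster name tidb-kp-pms mentioned later in the thread:

```shell
# Show the status, version, and role of every component, including the PD leader.
tiup cluster display tidb-kp-pms
```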

| username: xiaohetao | Original post link

Can we confirm which PD member node has the issue, or is the entire PD cluster having problems? If it’s a specific PD node, you can try taking the problematic node offline to see whether the cluster can function normally.
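
For reference, a minimal sketch of taking a single PD node offline with tiup, assuming the cluster name tidb-kp-pms from later in the thread and a hypothetical node address:

```shell
# Scale in the problematic PD node (the address 10.0.0.82:2379 is hypothetical).
tiup cluster scale-in tidb-kp-pms --node 10.0.0.82:2379
```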

| username: 胡杨树旁 | Original post link

Is the reason for this that the PD leader changed during the upgrade?

| username: Hacker_g3b9VBO9 | Original post link

The whole PD cluster has a problem now.

| username: xiaohetao | Original post link

Check if there are any errors or warning messages in the PD logs.

Also, review the log information during the upgrade.

| username: Hacker_g3b9VBO9 | Original post link

After upgrading to v6.1.1 using tiup cluster upgrade tidb-kp-pms v6.1.1, there were issues with PD. I then used tiup cluster upgrade tidb-kp-pms v6.2.0 --offline to upgrade, but the issues still persist.
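
For reference, the documented offline upgrade flow stops the cluster before upgrading; a minimal sketch of that sequence as it would apply here:

```shell
# Offline upgrade: the cluster must be stopped before running upgrade --offline.
tiup cluster stop tidb-kp-pms
tiup cluster upgrade tidb-kp-pms v6.2.0 --offline
tiup cluster start tidb-kp-pms
```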

| username: Hacker_g3b9VBO9 | Original post link

It kept reporting errors, so I deleted the logs; now I only have the current ones.

| username: Hacker_g3b9VBO9 | Original post link

The disk is full.
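
For reference, a minimal sketch of confirming a full disk and finding what is consuming the space (the PD data path below is a typical tiup default and is an assumption; it may differ on your deployment):

```shell
# Check free space on every filesystem.
df -h
# Rank the largest entries under the PD data directory (path is an assumption).
du -sh /tidb-data/pd-2379/* | sort -h | tail
```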

| username: Hacker_g3b9VBO9 | Original post link

If there is an issue with PD, how can we designate one as the PD leader?

| username: 胡杨树旁 | Original post link

Have you installed the ctl component? pd-ctl should work.
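
For reference, a minimal sketch of installing the ctl component through tiup and invoking pd-ctl with it (pick the version matching your cluster; the endpoint is the example address from earlier in the thread):

```shell
# Install the ctl component for a given version, then run pd-ctl through it.
tiup install ctl:v6.1.1
tiup ctl:v6.1.1 pd -u http://10.0.0.80:2379 health
```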

| username: tidb狂热爱好者 | Original post link

Yes, his disk is full.

| username: 胡杨树旁 | Original post link

In this situation, there won’t be a split-brain issue, but a leader cannot be elected: with the disk full, PD’s embedded etcd cannot persist Raft entries, so the election cannot complete.