TiFlash Node Cannot Be Decommissioned

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tiflash节点无法下线

| username: 烂番薯0

TiDB version 6.5.0




Everyone, is it necessary to have at least one TiFlash node to be able to decommission another one?

| username: 烂番薯0 | Original post link

When I use tiup to scale in, it reports that the store is still being used by TiFlash. Even after deleting it, it still doesn’t work. What should I do?

| username: 小龙虾爱大龙虾 | Original post link

Do you have tables with TiFlash replicas? :thinking:

| username: 烂番薯0 | Original post link

No, I just checked.
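The check mentioned above can be run directly against the `information_schema.tiflash_replica` system table, which lists every table that still has a TiFlash replica configured. A minimal sketch, assuming a default TiDB port and a cluster reachable at 127.0.0.1 (host, port, and user are placeholders):

```shell
# List all tables that still have TiFlash replicas configured.
# An empty result means no table should be pinning the TiFlash store.
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE \
   FROM information_schema.tiflash_replica;"
```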

| username: tidb狂热爱好者 | Original post link

Which version?

| username: 烂番薯0 | Original post link

Version 6.5.0

| username: TiDB_C罗 | Original post link

What is in the debug file?

| username: 烂番薯0 | Original post link

Experts, is it because the initial configuration file includes TiFlash that scaling in is not allowed, and TiFlash nodes can only be stopped individually?

| username: TiDB_C罗 | Original post link

You are referring to the topology.yaml file when creating a cluster, right? This file is only used when creating the cluster; it will not be read again during scaling operations.

| username: 烂番薯0 | Original post link

However, when I created the cluster, I defined TiFlash in the topology.yaml file. I’m not sure whether that affects scaling in. If it doesn’t, why can’t I scale in? If it does, it seems that once the TiFlash section in topology.yaml takes effect, it cannot be changed.

| username: 连连看db | Original post link

If you really can’t find the reason and are very eager to remove TiFlash, you can stop TiFlash first, use tiup cluster scale-in --force to force it offline, and then use tiup cluster prune to clean up the remnants.

| username: dba远航 | Original post link

You need to make sure that the TiFlash replica count of every table on the node being taken offline is 0.
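To drop a table's TiFlash replicas before scaling in, the replica count can be set back to 0 with DDL. A sketch, assuming a hypothetical database `test` and table `t` (run it for each table that still has a replica):

```shell
# Remove the TiFlash replica for one table (names are placeholders).
mysql -h 127.0.0.1 -P 4000 -u root -e \
  "ALTER TABLE test.t SET TIFLASH REPLICA 0;"
```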

| username: tidb菜鸟一只 | Original post link

How did you start this cluster? Did you set the max-replica parameter to 1? Is there only one TiKV node?
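The max-replicas setting asked about here can be read from PD. A sketch using pd-ctl through tiup, assuming PD is reachable at 127.0.0.1:2379 and the ctl version matches the cluster:

```shell
# Show PD's replication config, including max-replicas (default is 3).
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 config show replication
```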

| username: 烂番薯0 | Original post link

The number of replicas is 0, there are no replicas.

| username: 烂番薯0 | Original post link

This value is not set, it should be the default.

| username: 烂番薯0 | Original post link

So after forcing it offline, what should I do if there is still a region with ID 88?

| username: wangccsy | Original post link

Does this mean something is still using TiFlash, so it can’t be completely taken offline?

| username: 烂番薯0 | Original post link

It is a region on TiFlash, but I have no TiFlash replicas… And I can’t delete the store either…

| username: tidb菜鸟一只 | Original post link

This 88 is not the region ID but the store ID. You can check store 88 directly in pd-ctl; it should be the TiFlash node.
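Inspecting and removing the store can be done with pd-ctl as sketched below. The PD address is a placeholder, and `store delete` only marks the store as Offline; PD then migrates its data and eventually moves it to Tombstone, after which `tiup cluster prune` cleans it up:

```shell
# Inspect store 88 to confirm it is the stale TiFlash store.
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 store 88

# If confirmed, mark it offline so PD can retire it.
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 store delete 88
```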