Note:
This topic has been translated from a Chinese forum by GPT and might contain errors. Original topic: tiflash节点无法下线 (TiFlash node cannot be decommissioned)

TiDB version 6.5.0
Everyone, is it necessary to keep at least one TiFlash node in the cluster in order to decommission another one?
When using tiup to decommission the node, it reports that the store is still being used by TiFlash. Even after deleting it, the scale-in still doesn't work. What should I do?
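For reference, a typical scale-in attempt and status check look like the following (the cluster name mycluster and the TiFlash address 192.168.1.10:9000 are placeholders, not values from this thread):

# Ask tiup to decommission the TiFlash node (placeholder cluster name and address)
tiup cluster scale-in mycluster --node 192.168.1.10:9000
# A node that cannot finish decommissioning stays in Pending Offline in the display output
tiup cluster display mycluster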
Dear experts, is it because the initial configuration file includes TiFlash that scale-in is not allowed, and the TiFlash nodes can only be shut down individually?
You are referring to the topology.yaml file used when creating the cluster, right? That file is only read when the cluster is created; it will not be read again during scaling operations.
However, when I created the cluster I did define TiFlash in the topology.yaml file. I'm not sure whether that affects scale-in. If it has no effect, then why does scale-in still seem impossible? And if it does have an effect, I noticed that once the TiFlash section in topology.yaml takes effect, it cannot be changed.
If you really can't find the reason and urgently need to remove TiFlash, you can stop TiFlash first, use tiup cluster scale-in --force to take the node offline forcibly, and then use tiup cluster prune to clean up the leftovers.
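A minimal sketch of that sequence, again assuming the placeholder cluster name mycluster and TiFlash address 192.168.1.10:9000:

# Stop only the TiFlash node before forcing it out
tiup cluster stop mycluster --node 192.168.1.10:9000
# Force the scale-in even though PD still reports the store as in use
tiup cluster scale-in mycluster --node 192.168.1.10:9000 --force
# Remove the Tombstone leftovers from the topology
tiup cluster prune mycluster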
You need to make sure that the TiFlash replica count of every table on the TiFlash node being taken offline is 0.
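One way to check and clear the replicas is through the MySQL client against TiDB (the host, port, and the table name test.t1 are placeholders):

# List tables that still have TiFlash replicas configured
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT FROM information_schema.tiflash_replica;"
# Drop the TiFlash replica of a remaining table (placeholder table name)
mysql -h 127.0.0.1 -P 4000 -u root -e "ALTER TABLE test.t1 SET TIFLASH REPLICA 0;"

Once no table has a TiFlash replica left, the scale-in should be able to proceed.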
How did you start this cluster? Did you set the max-replicas parameter to 1? Is there only one TiKV node?
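Both can be checked through pd-ctl (the PD address below is a placeholder):

# Show max-replicas and the rest of the replication settings
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 config show replication
# List all stores to see how many TiKV and TiFlash nodes the cluster has
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 store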
So, if I force it offline, what should be done about this region with ID 88?
Does this mean something is still using TiFlash, so it can't be completely taken offline?
It's a region using TiFlash, but I don't have any replica count set… And I can't delete the store either…
This 88 is not the region ID but the store ID. You can check store 88 directly in pd-ctl; it should be the TiFlash node.
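For example (the PD address is a placeholder; a TiFlash store carries the label engine=tiflash):

# Inspect store 88 and look at its address and labels
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 store 88
# Only after the TiFlash replicas are cleared, mark the store for deletion if it still needs to be removed
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 store delete 88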