After deleting store 17849034, the store still exists

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: store delete 17849034 删除后store 还是存在

| username: rw12306

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] v6.1.2
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
After scaling out, the error says the store already exists. Following the official documentation, I ran store delete 17849034 to remove it, but the store still exists and the node fails to start (see the pd-ctl check sketched at the end of this post).

[Resource Configuration] Enter TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots/Logs/Monitoring]
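For reference, a minimal way to check what state that store is actually in is to query it through pd-ctl. This is only a sketch: the PD address is a placeholder, and the store ID is the one from this thread.

```shell
# Query store 17849034 via pd-ctl (run through tiup); replace the PD address with yours.
# In the output, state_name shows Up / Offline / Tombstone, and status.region_count
# shows how many Regions are still on this store.
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 store 17849034
```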

| username: tidb菜鸟一只 | Original post link

There are still Regions that haven’t finished migrating. Just wait for the store to go from Offline to Tombstone.
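A rough sketch of that wait-and-clean flow (PD address is a placeholder; the store ID is the one from this thread):

```shell
# Poll the store until state_name changes from "Offline" to "Tombstone".
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 store 17849034

# Once it is Tombstone, clear the tombstone record so the address/ports can be reused.
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 store remove-tombstone
```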

| username: rw12306 | Original post link

I am using new machines. All the old data was deleted because there was a problem with the disk before.
Why are the regions increasing? Shouldn’t they decrease as they migrate away?

| username: tidb菜鸟一只 | Original post link

Check region_count
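For example (jq is optional here, and the PD address is a placeholder):

```shell
# region_count for this store; it should fall toward 0 while the store is Offline.
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 store 17849034 | jq '.status.region_count'
```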

| username: rw12306 | Original post link

This is also increasing. It was 6666 before.

| username: tidb菜鸟一只 | Original post link

What exactly did you do: scale out first and then scale in? Did the scale-out complete? And did you take the node offline with store delete rather than with tiup?

| username: rw12306 | Original post link

I scaled in first, and after it reported success I scaled back out. But the node couldn’t start during the scale-out. Then I saw on the official website that I should use store delete, but the store couldn’t be deleted, and the node still won’t start.

| username: caiyfc | Original post link

Deleting the old data is not a normal way to scale in, right? This usually means the TiKV instance was removed abnormally while PD still keeps its store metadata. As an emergency measure, you can scale out a new TiKV with all of its ports changed, which should let it start, and then deal with the leftover store. If that store is no longer needed, remove its Regions first and then delete the store.
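A rough sketch of that workaround with tiup; the cluster name, host, ports, and directories below are made-up placeholders, not values from this thread.

```shell
# Write a scale-out topology that reuses the machine but changes every port and directory.
cat > scale-out.yaml <<'EOF'
tikv_servers:
  - host: 10.0.1.5
    port: 20161            # different from the old instance's 20160
    status_port: 20181     # different from the old instance's 20180
    deploy_dir: /data/tidb-deploy/tikv-20161
    data_dir: /data/tidb-data/tikv-20161
EOF

# Add the new TiKV instance to the cluster.
tiup cluster scale-out <cluster-name> scale-out.yaml
```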

| username: tidb菜鸟一只 | Original post link

Isn’t the usual order to scale out first and then scale in? How many TiKV nodes do you have?

| username: rw12306 | Original post link

3 TiKV nodes.

| username: TIDB-Learner | Original post link

What do you mean?

| username: tidb菜鸟一只 | Original post link

You have three TiKV nodes, so how could you scale in first and then scale out? You must scale out first and then scale in, because when you scale in, the Regions on that TiKV node have nowhere to migrate to… Use tiup to scale out a TiKV node first… Once the Regions have somewhere to go, the scale-in can complete normally.
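In tiup terms this is roughly the following; the cluster name and node address are placeholders:

```shell
# 1. Scale out a new TiKV node first so the Regions have somewhere to migrate.
tiup cluster scale-out <cluster-name> scale-out.yaml

# 2. Only then scale in the node to be removed, and wait for it to reach Tombstone.
tiup cluster scale-in <cluster-name> --node 10.0.1.4:20160
```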

| username: TiDBer_rvITcue9 | Original post link

Correct.

| username: DBAER | Original post link

The person above is right.

| username: 这里介绍不了我 | Original post link

It looks like a repair is needed. Take a look at the Online Unsafe Recovery documentation in the PingCAP Documentation Center.
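For reference, the pd-ctl entry point for that feature looks roughly like this. It is a last resort that gives up the replicas on the failed store; the PD address is a placeholder.

```shell
# Tell PD to recover Regions whose replicas on store 17849034 are permanently lost.
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 unsafe remove-failed-stores 17849034

# Check the progress of the unsafe recovery.
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 unsafe remove-failed-stores show
```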

| username: 小龙虾爱大龙虾 | Original post link

With 3 replicas you need at least 3 TiKV nodes, so it’s not possible to scale in normally.

| username: zhanggame1 | Original post link

I recall that scaling in TiKV with 3 replicas on only 3 nodes will get stuck. How about adjusting max-replicas to 1 before scaling in?
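If anyone tries that, the relevant pd-ctl settings would be something like the following sketch; note that dropping to a single replica removes all redundancy, and the PD address is a placeholder.

```shell
# Check the current replication settings (max-replicas is 3 by default).
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 config show replication

# Temporarily drop to a single replica before scaling in (data is no longer redundant!).
tiup ctl:v6.1.2 pd -u http://<pd-host>:2379 config set max-replicas 1
```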

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.