Scaling TiKV

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv的扩容 (Scaling out TiKV)

| username: 快乐的非鱼

【TiDB Usage Environment】Production Environment / Testing / PoC
【TiDB Version】
【Reproduction Path】Previously I was testing with one PD, one TiDB, and one TiKV. Now the TiKV disk is full, so I expanded by adding 2 more disks. The expansion was successful, but the space used on the full disk did not decrease.
【Encountered Problem: Problem Phenomenon and Impact】
How do I rebalance TiKV space? Is it balanced automatically by the system, or do I need to run a command to trigger the rebalancing?
【Resource Configuration】
【Attachments: Screenshots/Logs/Monitoring】

| username: xfworld | Original post link

You need to scale out TiKV nodes… just expanding the disk is not useful.
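
To make the scale-out concrete, here is a minimal sketch of adding a TiKV node with TiUP; the cluster name, IPs, ports, and directories are placeholders, not values from the original post:

```shell
# scale-out.yml: topology for the new TiKV instance (placeholder values)
cat > scale-out.yml <<'EOF'
tikv_servers:
  - host: 192.168.1.12          # new TiKV host (placeholder IP)
    port: 20160
    status_port: 20180
    data_dir: /data1/tikv-20160 # points at the newly added disk
EOF

# apply the scale-out; "tidb-test" is a placeholder cluster name
tiup cluster scale-out tidb-test scale-out.yml

# verify that the new store shows up as Up
tiup cluster display tidb-test
```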

| username: 快乐的非鱼 | Original post link

I am expanding nodes; maybe I didn’t make that clear.

| username: caiyfc | Original post link

Check whether the lines in these two panels (Overview -> TiKV: leader and region) have converged. If they have not converged, migration is still in progress; if they have, migration is complete. Normally, the numbers of leaders and regions end up balanced across the TiKV nodes.
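
If the Grafana panels are not handy, the same leader/region counts can be read per store with pd-ctl; the version tag and PD address below are placeholders:

```shell
# leader_count / region_count per store; rebalancing is done when
# they are roughly even (relative to each store's capacity)
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 store

# any pending balance operators here mean region migration is still running
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 operator show
```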

| username: Jiawei | Original post link

Follow the same troubleshooting approach as with hotspots and perform the balance operation.
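
To answer the original question directly: PD rebalances automatically through its balance schedulers, so no manual command is needed as long as they are enabled. A quick sanity check (version and PD address are placeholders):

```shell
# balance-leader-scheduler and balance-region-scheduler should both be listed
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 scheduler show

# limits that control how aggressively PD moves leaders/regions
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 config show | \
  grep -E '"leader-schedule-limit"|"region-schedule-limit"'
```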

| username: 大鱼海棠 | Original post link

One of the TiKV disks is a bit small. Add another TiKV node, and then scale in the TiKV instance that is on the PD node.
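
A rough sketch of that scale-in step, assuming the cluster is managed by TiUP; the cluster name and node address are placeholders:

```shell
# remove the old TiKV instance (here co-located with PD); PD migrates
# its regions to the remaining stores first, so this can take a while
tiup cluster scale-in tidb-test --node 192.168.1.10:20160

# once the store reaches Tombstone state, clean it up
tiup cluster display tidb-test
tiup cluster prune tidb-test
```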

| username: xingzhenxiang | Original post link

How is it done?

| username: tidb菜鸟一只 | Original post link

Did you originally set the number of replicas to 1? Otherwise a single TiKV node wouldn’t have been able to run. If so, you can directly scale in the old node now. With multiple TiKV instances on the same machine, it is very hard to balance data evenly across the three.
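
To check the replica count this reply refers to (and raise it later once there are enough TiKV nodes), pd-ctl exposes max-replicas; the version and PD address are placeholders:

```shell
# shows max-replicas; 1 would explain why a single TiKV could run
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 config show replication

# optional: restore the default of 3 replicas after scaling out to 3+ TiKV nodes
tiup ctl:v6.5.0 pd -u http://192.168.1.10:2379 config set max-replicas 3
```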

| username: 快乐的非鱼 | Original post link

It is a single machine; using different directories will suffice.

| username: knull | Original post link

You can first check the number of replicas (for example, with SHOW TABLE test.t REGIONS).
Additionally, you can look at the scheduling situation on the latest PD monitoring page.
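
Run through the mysql client, those checks could look like this; the host, port, and user are placeholders:

```shell
# region and peer layout of one table, as suggested above
mysql -h 192.168.1.11 -P 4000 -u root -e "SHOW TABLE test.t REGIONS;"

# leader/region counts per TiKV store, the same data PD balances on
mysql -h 192.168.1.11 -P 4000 -u root -e \
  "SELECT STORE_ID, ADDRESS, LEADER_COUNT, REGION_COUNT
   FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;"
```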