Issues with TiKV Data Storage

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TiKV数据存储问题

| username: TiDBer_QHSxuEa1

I'm still learning through videos. May I ask everyone: TiDB defaults to three replicas.
When I was testing the cluster deployment, I only started one TiKV instance. If I later scale out the TiKV nodes, will the original data be copied to the newly added nodes? If it doesn't copy automatically, is there any way to make it copy? And if it does, is there any way to prevent it from copying?
Thanks to everyone willing to help.

| username: Kongdom | Original post link

It will be copied to the newly expanded node, but there is a way to prevent it from being copied.

This command evicts leaders from a given store, so that the new machine does not hold any leader replicas:

scheduler add evict-leader-scheduler 1

Scheduling through labels should also work, but I haven’t used it.
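
For example, a rough sketch with pd-ctl run through tiup (the version, PD address, and store ID below are placeholders; look up the new node's real store ID first):

```shell
# Placeholders: replace the version, PD address, and store ID with your own values.
# List stores to find the store ID of the new TiKV node
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 store

# Evict leaders from that store so it holds no leader replicas
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 scheduler add evict-leader-scheduler 1

# Remove the scheduler later to let leaders be placed there again
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 scheduler remove evict-leader-scheduler-1
```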

| username: xfworld | Original post link

Why would you add nodes and set three replicas but not want the data to replicate? In what scenario does that come up?

Three replicas are meant to ensure high availability of the data. If replication is not allowed, why not just set it to one replica? Isn't that contradictory?
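
For reference, the replica count itself is a PD setting. A minimal sketch with pd-ctl (placeholders for version and PD address), assuming you really do only want one replica in a throwaway test cluster:

```shell
# Show the current replication settings, including max-replicas
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 config show replication

# Test clusters only: keep a single replica instead of three
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 config set max-replicas 1
```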

| username: zhanggame1 | Original post link

The original data will be replicated to the newly added nodes, with three replicas by default.

| username: redgame | Original post link

It will be copied to the new node.

| username: 像风一样的男子 | Original post link

It will be copied to the new node.

| username: TiDBer_jYQINSnf | Original post link

If there is only one TiKV, there is effectively only one replica. After you add more TiKV nodes, PD will automatically schedule data onto them. If you want to stop the scheduling, just remove all the schedulers.
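
A rough sketch of removing them with pd-ctl (placeholders for version and PD address; the two balance schedulers named below are the defaults that move leaders and regions):

```shell
# See which schedulers are currently running
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 scheduler show

# Remove the default balance schedulers to stop leader/region movement
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 scheduler remove balance-leader-scheduler
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 scheduler remove balance-region-scheduler
```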

| username: TiDBer_QHSxuEa1 | Original post link

Why didn’t the new node automatically replicate data after I expanded? 192.168.10.103 is the newly added node.


[screenshot]


| username: cassblanca | Original post link

It will automatically rebalance data to the new node.

| username: TiDBer_jYQINSnf | Original post link

Execute pd-ctl store to check the status of the store.
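
Something like this, assuming pd-ctl is run through tiup (placeholders for version and PD address). The output is JSON, and the fields worth checking for each store are region_count and leader_count:

```shell
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 store

# Optional, assuming the usual JSON layout and that jq is installed:
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 store \
  | jq '.stores[] | {address: .store.address, region_count: .status.region_count, leader_count: .status.leader_count}'
```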

| username: TiDBer_QHSxuEa1 | Original post link

[screenshot: store status output]

| username: TiDBer_jYQINSnf | Original post link

Your 103 is store 12001, and it already has data on it: the region count is 289 and the leader count is 145.
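
If you want to double-check, pd-ctl can also list the regions hosted on that store (placeholders for version and PD address; 12001 is the store ID from your screenshot):

```shell
# List regions that have a replica on store 12001
tiup ctl:v<cluster-version> pd -u http://<pd-host>:2379 region store 12001
```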

| username: TiDBer_QHSxuEa1 | Original post link

Why can't I see any data files under the configured data storage directory on the new node? The db folder only ever appears on hadoop102.

| username: 像风一样的男子 | Original post link

The data directories configured for your two TiKV nodes are different. Are you sure you're looking at the right directory?
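
One way to confirm each instance's data directory is tiup cluster display, which prints a Data Dir column per instance (the cluster name is a placeholder):

```shell
tiup cluster display <cluster-name>
```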

| username: TiDBer_QHSxuEa1 | Original post link

I found it. I was indeed looking at the wrong directory. Thanks :handshake:

| username: TiDBer_QHSxuEa1 | Original post link

I looked at the wrong directory, sorry about that. Thanks for the explanation :+1: