Application environment:
Test
TiDB version:
v7.1
Reproduction method:
Delete one TiKV
Problem:
If I delete one TiKV node, what does the TiDB cluster do? Is the data on the deleted TiKV node moved to the other TiKV nodes, or is it replicated to them from the Region leaders?
Resource allocation:
Attachment:
When a TiKV node is deleted from a TiDB cluster, the cluster takes several actions to ensure data integrity and availability:
- Data Redistribution and Replication: TiDB uses the Raft consensus algorithm to replicate each piece of data across multiple nodes. When a TiKV node is removed, the Placement Driver (PD) schedules the replicas that were stored on that node onto the remaining TiKV nodes and keeps the data evenly distributed. During a graceful removal (scale-in), replicas are migrated off the node before it goes offline; if the node is lost abruptly, new replicas are created by copying data from the surviving replicas of each Region, not from the departed node.
- Leader and Replica Management: Each piece of data belongs to a Region, and each Region has multiple replicas. When a TiKV node is removed, PD transfers the leaders of affected Regions to other nodes; for any Region whose leader was on the removed node, the remaining replicas elect a new leader.
- Automatic Recovery: TiDB handles node failures and deletions automatically. The cluster detects the loss of a node and initiates data redistribution and leader reassignment to maintain high availability and data consistency.
- Ensuring Data Consistency: The Raft consensus algorithm keeps data consistent across all replicas. Even after a node is deleted, the remaining replicas continue to provide consistent data, and new replicas are created as needed to restore the configured replication factor (three by default).
Overall, TiDB’s architecture is designed to handle the deletion of a TiKV node gracefully, ensuring that data is redistributed and replicated to maintain availability and consistency across the cluster.
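The process above can be sketched as a toy model. This is an illustrative simulation only, not actual PD or TiKV code: the `Region` class, `remove_store` function, and the "lowest store ID wins the election / first free store gets the new replica" rules are simplifying assumptions standing in for PD's real scheduling heuristics.

```python
# Toy model of how PD restores the replication factor after a TiKV
# store is removed. Names and selection rules are illustrative
# assumptions, not real PD/TiKV APIs.
from dataclasses import dataclass

REPLICAS = 3  # PD's default max-replicas setting


@dataclass
class Region:
    region_id: int
    replicas: set  # store IDs currently holding a replica
    leader: int    # store ID of the Raft leader


def remove_store(regions, stores, removed):
    """Simulate the cluster's reaction to deleting one TiKV store."""
    stores = [s for s in stores if s != removed]
    for r in regions:
        if removed in r.replicas:
            r.replicas.discard(removed)
            # If the leader was on the removed store, the remaining
            # replicas elect a new leader (here: lowest store ID).
            if r.leader == removed:
                r.leader = min(r.replicas)
            # PD schedules a replacement replica on a remaining store
            # that does not already hold one; the data is copied from
            # surviving replicas, not from the removed node.
            candidates = [s for s in stores if s not in r.replicas]
            if candidates and len(r.replicas) < REPLICAS:
                r.replicas.add(candidates[0])
    return regions, stores


regions = [
    Region(1, {1, 2, 3}, leader=1),  # leader unaffected by removal
    Region(2, {2, 3, 4}, leader=3),  # leader sits on the removed store
]
regions, stores = remove_store(regions, [1, 2, 3, 4], removed=3)
for r in regions:
    print(r.region_id, sorted(r.replicas), "leader:", r.leader)
# Both Regions end up with three replicas again, and Region 2
# has elected a new leader from its surviving replicas.
```

Running the sketch shows the two cases the answer describes: Region 1 only needs a replacement replica, while Region 2 additionally elects a new leader because its old leader lived on the removed store.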