Exporting Data Directly from TiKV Node Data Files Without Starting the Service to Handle Extreme Situations

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 不启动服务-直接从节点TiKV数据文件中导出数据应对极端情况

| username: goalzz85

I previously ran a three-node database cluster. The operations team said there were rolling hard-disk backups, so I only did daily export backups. Then a power outage killed the disks outright, and I discovered that the "backup" was just a disk image, and the three nodes had not been imaged at the same point in time. The data on the three nodes' disks was completely inconsistent, so the cluster could not start.

The client wanted as much of the latest data as possible, so I had to study the official source-code walkthroughs and storage documentation and read the data directly from TiKV's underlying RocksDB data files. (The data files of any single TiKV node were fine, but the data across nodes did not match at all, and TiDB reported errors on startup, possibly because of some Region merge operations.)

Recently I practiced Rust and wrote a tool that exports the database list, the table list of each database, and the corresponding table data directly from RocksDB. If you ever hit a similar scenario, it might come in handy someday, so you won't be as stressed as I was!

Here is the code repository. I compiled a binary with CentOS 7 + GCC 7; for other environments, please compile it yourself.
https://github.com/goalzz85/tidb-exporter

The principle is roughly as shown in the picture below: the tool exports data directly from RocksDB. Since this is a disaster-recovery export, it does not care whether a Region is the leader; it simply exports everything it finds. With a three-replica setup, it should in theory recover the full data set. I have tested it on tables with up to 1 billion rows.
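
To give a feel for what "export directly from RocksDB" means here, below is a minimal sketch (not the tool's actual code) that opens one node's KV RocksDB read-only with a recent version of the rust-rocksdb crate and scans the write column family, which holds committed MVCC records. The path and the column-family handling are assumptions; the real tool layers TiDB's key and value decoding on top of a scan like this.

```rust
// Minimal sketch (not tidb-exporter's actual code): open one TiKV node's KV
// RocksDB directory read-only and scan the "write" column family, which holds
// committed MVCC records. Assumes a recent version of the rust-rocksdb crate;
// the path below is a placeholder.
use rocksdb::{IteratorMode, Options, DB};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path to the "db" directory inside one TiKV node's data dir.
    let path = "/path/to/tikv/data/db";
    let opts = Options::default();

    // List whatever column families exist on disk and open them all read-only,
    // so the open call works regardless of the exact TiKV version.
    let cfs = DB::list_cf(&opts, path)?;
    let db = DB::open_cf_for_read_only(&opts, path, &cfs, false)?;

    // Scan every key in the "write" CF, regardless of which Region a key
    // belongs to or whether this replica was the Raft leader.
    let write_cf = db.cf_handle("write").ok_or("missing write CF")?;
    for item in db.iterator_cf(write_cf, IteratorMode::Start) {
        let (key, value) = item?;
        // The raw key still needs TiDB's decoding (table ID, row handle,
        // commit timestamp) before it is useful; that is what the tool adds.
        println!("key: {} bytes, value: {} bytes", key.len(), value.len());
    }
    Ok(())
}
```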

| username: 我是咖啡哥 | Original post link

So impressive :100:

| username: goalzz85 | Original post link

:blush: I also referenced a lot of the official libraries, ported some Go data structures over, and dropped support for some storage formats used by older versions. The way I read the data is based on my own understanding: for records with the same primary key, the latest version wins, and if the latest version is a delete, the row is treated as deleted. I have only verified that the exported data looks correct; I can't guarantee every conditional branch is covered. I figure being able to get the data out at all is good enough, haha.
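
To make that rule concrete, here is a simplified, hypothetical illustration (not the tool's actual types): for each primary key, keep only the write record with the highest commit timestamp, and drop the row if that record is a delete.

```rust
// Simplified, hypothetical illustration of the "latest version wins, and a
// delete means the row is gone" rule; these types are not the tool's own.
#[derive(Clone, Copy, PartialEq)]
enum WriteType {
    Put,
    Delete,
}

struct WriteRecord {
    commit_ts: u64,
    write_type: WriteType,
}

/// Given all MVCC write records seen for one primary key (possibly collected
/// from several replicas), decide whether the row should appear in the export.
fn row_is_live(records: &[WriteRecord]) -> bool {
    records
        .iter()
        .max_by_key(|r| r.commit_ts)
        .map(|latest| latest.write_type == WriteType::Put)
        .unwrap_or(false)
}

fn main() {
    let history = vec![
        WriteRecord { commit_ts: 100, write_type: WriteType::Put },
        WriteRecord { commit_ts: 250, write_type: WriteType::Delete },
    ];
    // The newest record is a delete, so the row is excluded from the export.
    println!("row live: {}", row_is_live(&history));
}
```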

| username: 有猫万事足 | Original post link

:+1::+1::+1:

| username: 像风一样的男子 | Original post link

:+1::+1::+1:
Hope I won’t need to use it.

| username: h5n1 | Original post link

Awesome, this is exactly the kind of tool the official team needs.

| username: zhanggame1 | Original post link

Awesome.
However, backing up a database by imaging the disks is a cold backup; it is only valid if the database is shut down first.

| username: goalzz85 | Original post link

Likewise, I hope no one needs to use it. If you do, it means you’re in a really tough spot.

| username: goalzz85 | Original post link

Yes, I understand it is a backup mechanism at the cloud-infrastructure level that does not affect normal operation of the system. But it does not snapshot all nodes at once; it queues them and backs up one node at a time, so the data files on each node end up completely different.

| username: tidb菜鸟一只 | Original post link

This is amazing…

| username: 魂之挽歌 | Original post link

:+1: :+1: :+1:

| username: zhanggame1 | Original post link

So that approach only works as a cold backup; stopping the database during the backup is the only way to guarantee backup consistency.

| username: 大飞哥online | Original post link

Thumbs up

| username: 小鱼吃大鱼 | Original post link

How to use it? The downloaded tar package does not contain an executable file.

| username: goalzz85 | Original post link

There is a release on GitHub; download the following:
tidb-exporter-v740-x86_64-unknown-linux-gnu.tar.gz

Currently, the Linux executable is compiled only on CentOS 7 + GCC 7. If you run into dependency issues, you will need to compile it yourself.

Run it with --help to see the usage. The main inputs are the TiKV data file path, the database and table to export, and the export path.

| username: 小鱼吃大鱼 | Original post link

Okay, there's a problem. I can list the databases, but when I try to list the tables in a specific database, it panics: thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: CorruptedData("parse table info error")', src/main.rs:125:69
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

If a physical file or a particular Region is corrupted, does that mean the data can no longer be read?

| username: goalzz85 | Original post link

First, let me confirm your TiDB version. If the files themselves are corrupted, it won't work, because the data is already damaged. This tool is currently meant for scenarios where the cluster has metadata anomalies or the services can't start and run normally: it copies data out of the underlying files, but the prerequisite is that those files are intact.

| username: 小鱼吃大鱼 | Original post link

v4.0.4, there is a damaged region.

| username: goalzz85 | Original post link

The default build is based on version 7.1; I have tested 5.x and 7.4 myself. I'm not sure whether your issue is file corruption or just Region corruption. If the data isn't sensitive, you can upload it or send it to me by email, and I'll take some time to check whether it can be supported.