How to Merge and Restore Periodic Log Backup Files

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 定期拷贝日志备份文件如何合并恢复 (How to merge and restore periodically copied log backup files)

| username: TiDBer_BwNZ5U9X

As mentioned, I am working on a complete backup and log backup solution for TiDB.
After using the tiup br log start command to specify a shared directory to start log backup, I can see various files generated in the directory.
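For reference, starting a log backup task that writes to a shared directory looks roughly like the sketch below. The PD address, task name, and NFS path are placeholders; the script only composes and prints the commands so the plan can be reviewed before anything runs.

```shell
# Sketch: start a PITR log backup task writing to a shared directory.
# The PD address, task name, and NFS path are placeholders.
PD="127.0.0.1:2379"
LOG_DIR="/mnt/nfs/tidb-log-backup"   # must be mounted on every TiKV node

# Compose the commands first so the plan is visible before running anything.
START_CMD="tiup br log start --task-name=pitr --pd=${PD} --storage=local://${LOG_DIR}"
STATUS_CMD="tiup br log status --task-name=pitr --pd=${PD}"

printf '%s\n' "$START_CMD" "$STATUS_CMD"
```

`log status` is worth running right after `log start` to confirm the task is registered and its checkpoint is advancing.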

I want to know whether the following is possible:
once a day, package that day's log backup files into a backup set for storage, then clear the local log files.
During recovery, first restore the full backup, then merge all the packaged backup sets into one directory and perform the PITR restore.

If possible, how should it be done, and how to merge the packaged files?

| username: 有猫万事足 | Original post link

I don’t think file merging can solve the problem.

Because the directory seems to contain a bunch of SST files, which can be understood as the raw region data, and they include all the MVCC information from the start of the backup up to now; that is what makes PITR possible. This is probably not something file merging can solve.

Also, some SST files may not be fully written yet, so splitting the restore by day will definitely produce overlaps, and if the day-by-day ordering is wrong, the restored logs will be unusable.

If you have to upload once a day, I think it is safer to stop the backup with tiup br log stop every day, archive the files, then restart the task from the recorded timestamp with a new storage path. That might work.

I haven’t tried it either. You can give it a try.
It’s best to use S3, which is very hassle-free. The reason you’re going through all this trouble is because you’re using a shared directory.
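The daily stop/archive/restart rotation suggested above could be scripted roughly as follows. The task name, PD address, and all paths are placeholders, and the script only prints the commands it would run, so it can be reviewed before being pointed at a real cluster.

```shell
# Sketch of a daily "stop -> archive -> restart" rotation for the log backup task.
# Task name, PD address, and paths are placeholders; review before running for real.
PD="127.0.0.1:2379"
TASK="pitr"
LOG_DIR="/mnt/nfs/tidb-log-backup"
TODAY=$(date +%Y%m%d)
ARCHIVE="/backup/archive/log-${TODAY}.tar.gz"
NEW_DIR="/mnt/nfs/tidb-log-backup-${TODAY}"

# 1. Stop the running log backup task.
STOP_CMD="tiup br log stop --task-name=${TASK} --pd=${PD}"
# 2. Archive everything the task wrote so far.
PACK_CMD="tar -czf ${ARCHIVE} -C ${LOG_DIR} ."
# 3. Restart into a fresh directory; --start-ts can pin the new task to the point
#    where the previous task stopped, so the TS chain stays continuous.
START_CMD="tiup br log start --task-name=${TASK} --pd=${PD} --storage=local://${NEW_DIR}"

printf '%s\n' "$STOP_CMD" "$PACK_CMD" "$START_CMD"
```

Note that `log stop` ends the task for good (unlike `log pause`/`log resume`), which is why the restart needs an explicit starting timestamp to avoid a gap.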

| username: zhang_2023 | Original post link

It’s much more convenient to set up shared storage or a shared directory.

| username: zhanggame1 | Original post link

How to merge the packaged files?
Just tar the package, no need to merge, right?

| username: TiDBer_BwNZ5U9X | Original post link

How do you recover the logs? Do you recover them one package at a time?

| username: zhanggame1 | Original post link

Try the TiDB recovery process and see if there are any issues. You need a full backup and the logs after the full backup.
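Assuming a BR version that supports `restore point`, the full backup and the log backup fit together roughly like this. The paths, PD address, and target timestamp are placeholders; the script only prints the command.

```shell
# Sketch: a PITR restore that combines the full (snapshot) backup with the log
# backup and replays logs up to a target time. All values are placeholders.
PD="127.0.0.1:2379"
FULL_DIR="/backup/full-20240501"
LOG_DIR="/backup/merged-logs"

# `restore point` restores the snapshot from --full-backup-storage, then replays
# the log backup from --storage up to --restored-ts.
PITR_CMD="tiup br restore point --pd=${PD} --full-backup-storage=local://${FULL_DIR} --storage=local://${LOG_DIR} --restored-ts='2024-05-02 00:00:00+0800'"

echo "$PITR_CMD"
```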

| username: 随缘天空 | Original post link

Hello, I have a question to confirm. Suppose we perform a full backup plus log backup to a shared directory, and the log backup is packaged once a day. During restoration we first restore the full backup, but there are multiple incremental packages. Do we need to run the restore multiple times, or can it be done in one go?

| username: 有猫万事足 | Original post link

It’s hard to answer without actually trying it. Personally, I feel that restoring in this way is very difficult and unreliable. Let’s see if any other experts have good practices.

| username: TIDB-Learner | Original post link

Has TiDB implanted a chip in my brain? Recently, I’ve been thinking about disaster recovery issues and found several articles on the forum homepage. :joy:

| username: TiDBer_BwNZ5U9X | Original post link

I tried it yesterday. You can stop the log backup first, then copy all the logs and package them.

When restoring, you can start from the full backup set and restore one by one, as long as the ts is continuous.

However, the recoverable time will differ from the backup time by a few minutes. I wonder if there is a forced-checkpoint operation, and whether log generation can be forced to narrow that gap.
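The package-by-package restore described here could be planned roughly as below. The day list, directory layout, and flags are hypothetical placeholders, and `<previous-package-end-ts>` is deliberately left unfilled because it must come from the previous package's actual metadata; the script only builds and prints the plan.

```shell
# Sketch: plan a package-by-package PITR restore in date order.
# Day list, paths, and flags are hypothetical placeholders.
PD="127.0.0.1:2379"
FULL_DIR="/backup/full-20240501"
PLAN=""
FIRST=1

for DAY in 20240502 20240503 20240504; do
  DEST="/backup/unpacked/log-${DAY}"
  PLAN="${PLAN}tar -xzf /backup/archive/log-${DAY}.tar.gz -C ${DEST}
"
  if [ "$FIRST" -eq 1 ]; then
    # The first package is replayed on top of the full backup.
    PLAN="${PLAN}tiup br restore point --pd=${PD} --full-backup-storage=local://${FULL_DIR} --storage=local://${DEST}
"
    FIRST=0
  else
    # Later packages continue from the previous package's end TS so the
    # chain stays continuous (fill in the real value from its metadata).
    PLAN="${PLAN}tiup br restore point --pd=${PD} --storage=local://${DEST} --start-ts=<previous-package-end-ts>
"
  fi
done

printf '%s' "$PLAN"
```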

| username: 随缘天空 | Original post link

Are you saying that during recovery, you first perform a full restore, and then restore the incremental data in batches according to the chronological order? For example, if the full backup is on 2024-05-01, then you have incremental backups on the 2nd, 3rd, 4th, and so on. During recovery, you first restore all data up to the 1st, and then perform the restore for the 2nd, 3rd, and so on in sequence. Is that correct?

| username: 有猫万事足 | Original post link

Good practice.

There is no forced checkpoint operation.

However, you can use the br log metadata command to ensure that the most recent recoverable time point includes the previous full day. But if you want to precisely control it down to the second level, it might indeed be a bit difficult.
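`br log metadata` prints the recoverable timestamp range of a log backup directory, and comparing adjacent packages' ranges is one way to confirm the TS chain is continuous. A minimal sketch (the path is a placeholder; the script only prints the command):

```shell
# Sketch: inspect a log backup package's recoverable TS range.
# The path is a placeholder.
LOG_DIR="/backup/unpacked/log-20240502"
META_CMD="tiup br log metadata --storage=local://${LOG_DIR}"
echo "$META_CMD"
# The output reports the earliest and latest restorable timestamps; a later
# package's earliest TS should not exceed the previous package's latest TS.
```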

| username: TiDBer_BwNZ5U9X | Original post link

Yes, as long as each backup set's start TS is less than the end TS of the previous backup set, they can be restored in order.

| username: 随缘天空 | Original post link

If that’s the case, when there are too many incremental data directories, the recovery process feels quite cumbersome. It’s not feasible to restore them day by day, right?

| username: TiDBer_BwNZ5U9X | Original post link

Only the days since the most recent full backup need to be restored day by day. The backup source metadata recorded in the log backup directory cannot be merged.

| username: 随缘天空 | Original post link

Alright, this is indeed a bit inconvenient. We can only try to increase the frequency of full backups to reduce the number of incremental directories.

| username: zhh_912 | Original post link

This type of recovery is actually not recommended.

| username: yytest | Original post link

You can try using the operating system’s rsync tool; it’s very useful for remote transfers.
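For reference, mirroring the log backup directory to a remote host with rsync could look like the sketch below. The host, user, and paths are placeholders, and the command is only printed, not executed.

```shell
# Sketch: mirror the log backup directory to a remote host with rsync.
# Host, user, and paths are placeholders.
SRC="/mnt/nfs/tidb-log-backup/"
DST="backup@remote-host:/backup/tidb-log-backup/"
# -a preserves attributes, -z compresses in transit; the trailing slash on SRC
# copies the directory's contents rather than the directory itself.
RSYNC_CMD="rsync -az ${SRC} ${DST}"
echo "$RSYNC_CMD"
```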