Issues with BR Backup and Restore Commands

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: BR备份与恢复指令问题

| username: TiDB超级萌新

The storage part of this command can have ‘s3’ replaced with ‘local’.

So when restoring, do I also write such a long address? If I am on another Linux machine that is reachable over the network, how should I write it?

Do I use a command like this:

scp* .

and then use ‘local’ on the new Linux machine?

Can ‘s3’ here only be replaced with ‘local’?

| username: zhanggame1 | Original post link

There are two local options:

  1. Use NFS to back up everything to a shared path.
  2. Use SCP to transfer a copy of each node’s local backup to other nodes.
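
Option 2 above (SCP aggregation) can be sketched as a small script that generates the copy commands. This is a hypothetical illustration: the hostnames `tikv-1/2/3` and the path `/data/backup` are placeholders, not values from this thread.

```shell
#!/bin/sh
# Sketch: after `br backup full --storage "local:///data/backup"`, each TiKV
# node holds only its own SST files. Generate the scp commands that copy every
# node's backup directory to every other node, so all nodes see the full set.
gen_scp_cmds() {
  backup_dir=$1; shift
  for src in "$@"; do
    for dst in "$@"; do
      # Skip copying a node's files onto itself
      [ "$src" = "$dst" ] && continue
      echo "scp -r ${src}:${backup_dir}/* ${dst}:${backup_dir}/"
    done
  done
}

# Print the copy commands for a hypothetical 3-node cluster;
# pipe the output to `sh` to actually run them.
gen_scp_cmds /data/backup tikv-1 tikv-2 tikv-3
```

For a 3-node cluster this produces 6 `scp` commands (each node pushes to the two others); review them before piping to `sh`.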
| username: changpeng75 | Original post link

You can refer to the documentation

| username: TiDB超级萌新 | Original post link

It’s not a single cluster. I started a new cluster separately.

| username: 春风十里 | Original post link

S3 is object storage, which requires a dedicated server to provide the S3 service. Restoring on another machine works the same way: as long as the S3 address is reachable over the network and the account credentials are correct, it can be accessed. When restoring, you specify the same address.
‘Local’ refers to a local directory on the TiKV nodes, which can also be an NFS directory. If you use a plain local directory, all nodes must use the same path, and each node’s backup files must be copied to all the other nodes before restoring.
S3 itself is not local.
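
As a hedged sketch of the S3 workflow described above: back up from the source cluster and restore into a different cluster using the same storage URL, changing only `--pd`. The PD addresses, MinIO endpoint, bucket name, and credentials below are all placeholder assumptions.

```shell
# Credentials can be passed via environment variables (placeholders here).
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# Back up the source cluster to an S3-compatible store (e.g. MinIO):
tiup br backup full \
  --pd "10.0.1.10:2379" \
  --storage "s3://backups/full-2024-01-14?endpoint=http://10.0.1.20:9000"

# Restore into a different cluster: the --storage URL is identical,
# only --pd points at the new cluster's PD.
tiup br restore full \
  --pd "10.0.2.10:2379" \
  --storage "s3://backups/full-2024-01-14?endpoint=http://10.0.1.20:9000"
```

Because the backup lives on the S3 server rather than on the TiKV nodes, no manual aggregation or copying between machines is needed.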

TiDB Backup and Restore Overview | PingCAP Documentation Center

Choosing Backup Storage

Amazon S3, Google Cloud Storage (GCS), and Azure Blob Storage are the recommended storage systems. Using these systems, you don’t need to worry about backup capacity, bandwidth planning, etc.

If the TiDB cluster is deployed in a self-hosted data center, the following methods are recommended:

  • Set up MinIO as the backup storage system and use the S3 protocol to back up data to MinIO.
  • Mount an NFS (such as NAS) drive to the br tool and all TiKV instances, and use the POSIX file system interface to write backup data to the corresponding NFS directory.
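
The NFS option above can be sketched as follows. This is an illustrative assumption, not a command from this thread: the NFS server `10.0.1.30`, export `/exports/tidb-backup`, and mount point `/mnt/backup` are placeholders.

```shell
# Mount the same NFS export at the same path on the br host AND on every
# TiKV node (run this on each machine):
sudo mkdir -p /mnt/backup
sudo mount -t nfs 10.0.1.30:/exports/tidb-backup /mnt/backup

# Then back up via the POSIX interface; to br this is just a local directory,
# but because it is shared, all nodes write into one aggregated location:
tiup br backup full \
  --pd "10.0.1.10:2379" \
  --storage "local:///mnt/backup/full-2024-01-14"
```

Since every node sees the same shared directory, the scattered-SST problem described below does not arise.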


If NFS is not mounted to the br tool or TiKV nodes, or if remote storage supporting S3, GCS, or Azure Blob Storage protocols is used, the br tool will generate backup data on each TiKV node. Note that this is not the recommended way to use the br tool because the backup data will be scattered across the local file systems of various nodes. Aggregating this backup data may cause data redundancy and operational difficulties, and attempting to restore without aggregating this data may result in SST file not found errors.

| username: TiDB超级萌新 | Original post link

Okay, understood, thank you. So what should be written here if it’s NFS?

| username: 春风十里 | Original post link

NFS is equivalent to local.

| username: 春风十里 | Original post link

NFS is a directory on the operating system; to applications it looks like an ordinary local directory. NFS also requires a server side, but setting it up is relatively simple and much easier than S3; you can find many introductions online. Note, however, that NFS can be unstable under high concurrency and may occasionally hang. This is an old issue, though it is rarely encountered under normal circumstances. You can refer to the following example:

[root@localhost ~]# tiup br backup full --pd "" --storage "local:///tmp/backup4" --ratelimit 120 --log-file backupfull3.log
tiup is checking updates for component br ...
Starting component `br`: /root/.tiup/components/br/v7.5.0/br backup full --pd --storage local:///tmp/backup4 --ratelimit 120 --log-file backupfull3.log
Detail BR log in backupfull3.log 
[2024/01/14 22:22:53.968 +08:00] [WARN] [backup.go:312] ["setting `--ratelimit` and `--concurrency` at the same time, ignoring `--concurrency`: `--ratelimit` forces sequential (i.e. concurrency = 1) backup"] [ratelimit=125.8MB/s] [concurrency-specified=4]
Full Backup <--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2024/01/14 22:29:26.162 +08:00] [INFO] [collector.go:77] ["Full Backup success summary"] [total-ranges=563] [ranges-succeed=563] [ranges-failed=0] [backup-checksum=37.87673844s] [backup-fast-checksum=596.027937ms] [backup-total-ranges=646] [total-take=6m32.19307978s] [BackupTS=447201206333341701] [total-kv=23077589] [total-kv-size=3.309GB] [average-speed=8.437MB/s] [backup-data-size(after-compressed)=1.165GB] [Size=1164606935]
[root@localhost ~]# 
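
A restore counterpart to the backup example above might look like this. It is a sketch: the PD address is a placeholder, and with `local://` storage the path `/tmp/backup4` must contain the full backup on every TiKV node of the target cluster.

```shell
# Restore the full backup written to local:///tmp/backup4 above.
# --pd is a placeholder for the target cluster's PD endpoint.
tiup br restore full \
  --pd "10.0.2.10:2379" \
  --storage "local:///tmp/backup4" \
  --log-file restorefull.log
```

If the backup was taken to per-node local directories (no NFS), copy all nodes' files into `/tmp/backup4` on every target node first, or the restore can fail with SST file not found errors.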
| username: TiDB超级萌新 | Original post link

Okay, thank you!!

| username: TiDB超级萌新 | Original post link

Are there any other protocols supported besides S3? If S3 is used, does it mean that there is no need to aggregate this data?

| username: 春风十里 | Original post link

Amazon S3, Google Cloud Storage (GCS), and Azure Blob Storage are recommended storage system choices. By using these systems, you don’t need to worry about backup capacity, backup bandwidth planning, etc.

I haven’t used Google Cloud Storage (GCS) or Azure Blob Storage, but as far as I know they are all object storage systems.

In fact, there are many self-built object storage solutions, and since they all originated from Amazon S3, my (unofficial) understanding is that any object storage supporting the S3 protocol should work. Other than that, NFS is supported, but I haven’t seen any other protocols mentioned.

| username: TiDB超级萌新 | Original post link

Using these means you don’t need to aggregate the data yourself, as you would with local storage, right?

| username: oceanzhang | Original post link

For NFS, do you just write the mount path?

| username: zhang_2023 | Original post link

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.