In Production, How to Set the Backup Directory for BR?

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 生产中,如何设定BR的备份目录?

| username: TiDBer_JUi6UvZm

When using BR for backup, the backup directory on each server stores only that TiKV node's data (not the entire TiKV dataset). However, before using BR to restore, every TiKV node needs access to the full backup set, so the backup files from the other TiKV nodes have to be copied onto each node. This restore requirement seems quite strange. Why does it have to work this way? Is this still required in newer versions? Copying is quite troublesome. How is this generally handled in actual production? Do you mount a large shared disk and save all the backups generated by BR onto it?

| username: TiDBer_JUi6UvZm | Original post link

I don't understand the steps outlined above; they seem a bit redundant.

| username: zhaokede | Original post link

Backing up to local disks might not consume network bandwidth, so it should be a bit faster.

| username: zhaokede | Original post link

That’s how it’s explained in the 303 tutorial.

| username: wangkk2024 | Original post link

Just mount the disk.

| username: 我是咖啡哥 | Original post link

Mount a shared disk; this is basically a standard requirement for distributed systems.

| username: TiDBer_JUi6UvZm | Original post link

After backing up a TiKV node, wouldn't it be sufficient to restore that node from its own backup? Or does the placement of Regions on each TiKV node change every time a restore is performed?

| username: 随缘天空 | Original post link

The official demonstration only shows you how to use the tool; a real production environment definitely wouldn't be operated this way. In production, you would either set up an NFS file share yourself or use an S3-compatible object storage service. That way, when backing up and restoring, you can point the storage path directly at the shared location.
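To illustrate the NFS approach: if the same NFS export is mounted at an identical path on every TiKV node and on the machine running BR, the backup can be written with a `local://` storage URL and no copying between nodes is needed at restore time. The NFS server address, mount point, PD address, and backup directory below are placeholders, not values from this thread:

```shell
# Mount the same NFS export at the same path on EVERY TiKV node
# and on the host running BR (hostnames/paths are examples).
sudo mkdir -p /nfs/tidb_backup
sudo mount -t nfs nfs-server.example.com:/export/tidb_backup /nfs/tidb_backup

# Full backup to the shared mount (run from any host with br/tiup):
tiup br backup full \
    --pd "10.0.1.1:2379" \
    --storage "local:///nfs/tidb_backup/full-2024-01-01"

# Restore later from the same shared path:
tiup br restore full \
    --pd "10.0.1.1:2379" \
    --storage "local:///nfs/tidb_backup/full-2024-01-01"
```

Because every node sees the same directory through NFS, each TiKV writes its own SST files there during backup, and each TiKV can read whatever it needs during restore.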

| username: TiDBer_JUi6UvZm | Original post link

Alright, not having shared storage is still a bit troublesome.

| username: zhanggame1 | Original post link

For private data center deployments, both NFS and MinIO (as S3-compatible shared storage) can be used.
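As a sketch of the MinIO/S3 route: BR accepts an `s3://` storage URL plus an endpoint override, and the credentials can be passed to the TiKV nodes so each node uploads its own data directly. The bucket name, endpoint, keys, and PD address below are illustrative assumptions, not values from this thread:

```shell
# Credentials for the MinIO deployment (example values).
export AWS_ACCESS_KEY_ID="minio_access_key"
export AWS_SECRET_ACCESS_KEY="minio_secret_key"

# Full backup straight to an S3-compatible MinIO bucket;
# --send-credentials-to-tikv lets each TiKV node upload its own SSTs.
tiup br backup full \
    --pd "10.0.1.1:2379" \
    --storage "s3://backup-bucket/full-2024-01-01" \
    --s3.endpoint "http://minio.example.com:9000" \
    --send-credentials-to-tikv=true

# Restore from the same bucket, no file copying between nodes required:
tiup br restore full \
    --pd "10.0.1.1:2379" \
    --storage "s3://backup-bucket/full-2024-01-01" \
    --s3.endpoint "http://minio.example.com:9000" \
    --send-credentials-to-tikv=true
```

Since every TiKV node reads from and writes to the same bucket, this avoids the per-node copy step entirely, which is why object storage is the common choice in production.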

| username: stephanie | Original post link

Configuring a shared disk for backups is more convenient. If you back up to local disks on each TiKV node, you will need to gather the files together during recovery, which is particularly troublesome.

| username: paulli | Original post link

Mount shared disk

| username: 随缘天空 | Original post link

Yes, this is just a demonstration to familiarize everyone with the recovery steps. In actual situations, enterprises definitely wouldn’t do it this way. After all, with a large number of cluster nodes and replicas, this operation is very inconvenient and prone to errors. The video is for learning purposes only.

| username: shigp_TIDBER | Original post link

Yes, the official release is for reference only.

| username: jiayou64 | Original post link

In a test or demo environment this probably isn't necessary. A production environment will definitely have sufficient storage, and BR physical backup requires the complete data set to maintain consistency. I noticed the lab content of 301 and 303 is almost identical.

| username: zhang_2023 | Original post link

Mount a disk specifically for backups.

| username: dba远航 | Original post link

In a production environment, settings should be configured according to your own storage conditions.

| username: TiDBer_QYr0vohO | Original post link

Mount NAS

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.