How do you back up 200TB and 1000TB of data?

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 200t 1000t的数据你们是如何做备份的?

| username: tidb狂热爱好者

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachment: Screenshot/Logs/Monitoring]
Due to cost issues, we chose a single-node TiDB with local disk. We are worried about data loss and need to perform backups.

| username: yulei7633 | Original post link

A full backup every Saturday, with incremental backups on the remaining days of the week.
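
A weekly cycle like that can be sketched as a small cron-driven script. Everything here (PD address, bucket name, the `plan_backup` helper) is a hypothetical illustration, not from the thread; the script only prints the `br` command it would run:

```shell
#!/bin/sh
# Hypothetical weekly cycle: full backup on Saturday, incremental on the
# other days. PD address and S3 bucket are placeholders. The script only
# prints the br command it would run; execute the printed command for real.
PD="127.0.0.1:2379"
STORE="s3://backup-bucket/tidb"

plan_backup() {
  # $1 is the day of week: 1=Monday .. 6=Saturday, 7=Sunday
  if [ "$1" -eq 6 ]; then
    echo "br backup full --pd $PD --storage $STORE/full-$(date +%F)"
  else
    # Incremental: only data written after the previous backup's end TS.
    echo "br backup full --pd $PD --storage $STORE/incr-$(date +%F) --lastbackupts <ts-of-previous-backup>"
  fi
}

plan_backup "$(date +%u)"
```

BR expresses an incremental backup as a `backup full` restricted by `--lastbackupts`, where the timestamp comes from the previous backup's metadata.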

| username: 像风一样的男子 | Original post link

At this data volume, you can really only back up the important tables.

| username: 考试没答案 | Original post link

  1. Cost: storage fees must be low enough.
  2. Security: the storage architecture must be highly available.
  3. Usability: data at this scale is generally logs, i.e. non-core.

Traditional full plus 7-day incremental backups are almost impossible at this scale. Move to the cloud; OSS object storage, perhaps?

| username: DBAER | Original post link

Data at this volume is probably AP (analytical) data; it should live in distributed or object storage to save costs.

| username: lemonade010 | Original post link

At this data volume, even with backups, recovery would take a disastrously long time. You can only do full backups at a long interval, such as once a month, and run incremental backups in between.

| username: yiduoyunQ | Original post link

BR full backup and restore is workable at around 300 TB.

| username: zhanggame1 | Original post link

Even with a backup, recovery is still difficult.

| username: TiDBer_jYQINSnf | Original post link

We don't have anything like 1000 TB; at most it's around 40 TB. We do a full backup with BR once a week, then incremental backups, all to S3.
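
The incremental step against S3 can be sketched like this (bucket paths and PD address are placeholders; the `run` helper prints each command instead of executing it):

```shell
#!/bin/sh
# Sketch of a BR incremental backup to S3. Paths and PD address are
# placeholders; `run` prints instead of executing -- remove it for real use.
run() { echo "+ $*"; }
PD="127.0.0.1:2379"
FULL="s3://backup-bucket/tidb/full"
INCR="s3://backup-bucket/tidb/incr-$(date +%F)"
LAST_TS="<end-version-from-step-1>"   # placeholder; capture step 1's output

# 1. Read the end TS recorded in the previous backup's metadata.
run br validate decode --field="end-version" -s "$FULL"

# 2. Back up only data written after that TS.
run br backup full --pd "$PD" -s "$INCR" --lastbackupts "$LAST_TS"
```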

| username: TiDBer_jYQINSnf | Original post link

Is your single node a single-replica TiKV? A single replica is very prone to problems: even with backups, you can't guarantee zero data loss, because backups aren't real-time. If only the TiDB server is single-node, the impact is not significant.

| username: porpoiselxj | Original post link

Cost aside, setting up an identical cluster with TiCDC for real-time incremental replication is a good idea. At this volume, a full database backup is probably not feasible: the problem is not capacity but the load a full backup puts on the primary cluster, especially during the final verification phase.
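
Creating such a changefeed into a standby cluster might look roughly like this (host names, credentials, and the changefeed ID are all placeholder assumptions; `run` prints the command instead of executing it):

```shell
#!/bin/sh
# Sketch: replicate into an identical standby TiDB cluster via TiCDC.
# Hosts, credentials, and the changefeed ID are placeholders; `run`
# prints instead of executing.
run() { echo "+ $*"; }

run cdc cli changefeed create \
    --server=http://cdc-host:8300 \
    --sink-uri="mysql://root:password@standby-tidb:4000/" \
    --changefeed-id="standby-replication"
```

Note that older TiCDC versions take `--pd` instead of `--server`; check the CLI of your version.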

| username: Kongdom | Original post link

Most likely you can only schedule periodic full backups, with incremental backups in between.

| username: TiDBer_5cwU0ltE | Original post link

Frequent full backups are not practical; full plus incremental backups should suffice. Alternatively, you can use storage-level snapshots, but those are more expensive.

| username: 江湖故人 | Original post link

You can only reduce the backup frequency and select important tables to back up.

| username: 路在何chu | Original post link

Back up in batches, splitting by database and table.
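
Batching can be sketched as one BR run per database, so each job is smaller and schedulable in its own window (database names and the bucket are placeholders; `run` prints instead of executing):

```shell
#!/bin/sh
# Sketch: split one huge backup into per-database BR runs.
# Names are placeholders; `run` prints instead of executing.
run() { echo "+ $*"; }
PD="127.0.0.1:2379"

for db in orders logs metrics; do
  run br backup db --pd "$PD" --db "$db" \
      -s "s3://backup-bucket/tidb/$db-$(date +%F)"
done
```

`br backup table --db ... --table ...` narrows this further, down to single important tables.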

| username: Soysauce520 | Original post link

Bro, the only limitations on backup speed are bandwidth and disk speed.
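
A back-of-envelope check of that limit (the 10 Gbit/s figure is an assumed example, not from the thread):

```shell
#!/bin/sh
# Rough wall-clock lower bound for moving 200 TB at ~10 Gbit/s
# (~1250 MB/s) of usable bandwidth, ignoring compression and throttling.
TB=200
MB_PER_S=1250
secs=$(( TB * 1024 * 1024 / MB_PER_S ))
echo "$(( secs / 3600 )) hours"      # -> 46 hours, roughly two days
```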

| username: forever | Original post link

Even a full backup once a month with daily incremental backups in between often affects normal business operations, since it consumes I/O and bandwidth.

| username: wakaka | Original post link

Back up the important tables, and lengthen the GC interval.
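
Lengthening the GC window lets a long-running backup keep reading the old MVCC snapshot it started from; a sketch (the 48h value and connection details are assumptions; `run` prints instead of executing):

```shell
#!/bin/sh
# Sketch: raise TiDB's GC life time (default 10m) so slow backups can
# still read the MVCC versions they started from. Values and hosts are
# placeholders; `run` prints instead of executing.
run() { echo "+ $*"; }

run mysql -h 127.0.0.1 -P 4000 -u root \
    -e "SET GLOBAL tidb_gc_life_time = '48h';"
```

Remember to set it back afterwards; a long GC window keeps old versions around and grows storage.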

| username: 我是人间不清醒 | Original post link

I haven’t encountered such a large amount.

  1. Multi-center.
  2. Back up the important tables.
  3. Is such a large amount all hot data? Separate hot and cold data.

| username: 随缘天空 | Original post link

The data volume is too large; a full backup in a short window is unrealistic, and the backup operation itself will affect cluster performance. It is best to run periodic full backups plus incremental backups of the important tables, and to store the backups in a distributed file system. The risk with local disks is too high: if one is damaged, the data may be lost.