BackupSchedule Log Backup Failure

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: BackupSchedule的日志备份失败

| username: MagicJie

[TiDB Version] 6.5.2
[Reproduction Path]
Created a scheduled backup task using BackupSchedule
[Encountered Issue: Symptoms and Impact]
Log backup failed
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
```
I0526 09:46:01.958315 9 backup.go:262] [2023/05/26 09:46:01.958 +00:00] [ERROR] [stream.go:513] ["failed to stream"] [command="log start"] [error="failed to commit the change for task log-backup-schedule-s3: etcdserver: too many operations in txn request"] [errorVerbose="etcdserver: too many operations in txn request\nfailed to commit the change for task log-backup-schedule-s3\ngithub.com/pingcap/tidb/br/pkg/streamhelper.(*MetaDataClient).PutTask\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/streamhelper/client.go:152\ngithub.com/pingcap/tidb/br/pkg/task.RunStreamStart\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/task/stream.go:628\ngithub.com/pingcap/tidb/br/pkg/task.RunStreamCommand\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/task/stream.go:512\nmain.streamCommand\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/stream.go:231\nmain.newStreamStartCommand.func1\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/stream.go:70\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:916\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/main.go:57\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594"]
[stack="github.com/pingcap/tidb/br/pkg/task.RunStreamCommand\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/task/stream.go:513\nmain.streamCommand\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/stream.go:231\nmain.newStreamStartCommand.func1\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/stream.go:70\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:916\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/main.go:57\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"]
```

| username: MagicJie | Original post link

This database has approximately 8,000 tables. The stack trace points at the code that registers the observed tables for the log backup task. Could it be that too many tables are being observed at once, with all of the operations packed into a single etcd transaction, triggering this error? Is this a bug?
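The hypothesis above lines up with the error text: etcd rejects any single transaction containing more operations than its `--max-txn-ops` limit (128 by default) with exactly this "etcdserver: too many operations in txn request" message. A minimal Go sketch of the usual client-side workaround is to split a large set of keys into batches under that limit and commit each batch as its own transaction. Note this is an illustration of the general technique, not BR's actual fix; `batchKeys` is a hypothetical helper.

```go
package main

import "fmt"

// etcd's default --max-txn-ops is 128: a txn with more operations than
// this is rejected with "etcdserver: too many operations in txn request".
const maxTxnOps = 128

// batchKeys splits keys into chunks of at most limit entries, so each
// chunk can be committed as a separate etcd transaction.
func batchKeys(keys []string, limit int) [][]string {
	var batches [][]string
	for len(keys) > limit {
		batches = append(batches, keys[:limit])
		keys = keys[limit:]
	}
	if len(keys) > 0 {
		batches = append(batches, keys)
	}
	return batches
}

func main() {
	// e.g. one put per observed table: 8000 tables would need 63 txns
	// of at most 128 ops each, instead of one oversized txn.
	keys := make([]string, 8000)
	batches := batchKeys(keys, maxTxnOps)
	fmt.Println(len(batches)) // prints 63
}
```

Each batch would then be committed via its own `clientv3.Txn(...).Then(puts...).Commit()` call; the trade-off is that the writes are no longer atomic as a whole.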

| username: zhanggame1 | Original post link

Having too many tables or partitions can easily cause issues. In the past, using Oracle Data Pump to handle tables with tens of thousands of partitions was also quite unstable.

| username: TiDBer_ywlKbJr5 | Original post link

Could you please share how this problem was solved?
I ran into the same issue: with only around 300 tables being backed up, the same failure log appears.

| username: TiDBer_3Cusx9uk | Original post link

Following closely to see if there are any good solutions.