TiCDC tasks do not fail even if the target table no longer exists

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: ticdc任务不失败,即使目标端的表已经不存在了

| username: TiDBer_5VobY5Th

I created a synchronization task with the following configuration:

```json
{
  "upstream_id": 7369155951273047833,
  "namespace": "default",
  "id": "ee67a2ab-338f-44ee-8e80-c382fa2aa0b6",
  "sink_uri": "mysql://root:xxxxx@10.20.50.88:4000",
  "create_time": "2024-05-27T23:38:58.281599629-04:00",
  "start_ts": 450066520525242371,
  "config": {
    "memory_quota": 1073741824,
    "case_sensitive": false,
    "force_replicate": true,
    "ignore_ineligible_table": false,
    "check_gc_safe_point": true,
    "enable_sync_point": false,
    "enable_table_monitor": false,
    "bdr_mode": false,
    "sync_point_interval": 600000000000,
    "sync_point_retention": 86400000000000,
    "filter": {
      "rules": ["test.*", "test2.*"]
    },
    "mounter": {
      "worker_num": 16
    },
    "sink": {
      "delete_only_output_handle_key_columns": null,
      "content_compatible": null,
      "advance_timeout": 150,
      "send_bootstrap_interval_in_sec": 120,
      "send_bootstrap_in_msg_count": 10000,
      "send_bootstrap_to_all_partition": true,
      "debezium_disable_schema": false
    },
    "consistent": {
      "level": "none",
      "max_log_size": 64,
      "flush_interval": 2000,
      "meta_flush_interval": 200,
      "encoding_worker_num": 16,
      "flush_worker_num": 8,
      "use_file_backend": false,
      "memory_usage": {
        "memory_quota_percentage": 50
      }
    },
    "scheduler": {
      "enable_table_across_nodes": false,
      "region_threshold": 100000,
      "write_key_threshold": 0
    },
    "integrity": {
      "integrity_check_level": "none",
      "corruption_handle_level": "warn"
    },
    "changefeed_error_stuck_duration": 1800000000000,
    "synced_status": {
      "synced_check_interval": 300,
      "checkpoint_interval": 15
    }
  },
  "state": "warning",
  "creator_version": "v8.0.0",
  "resolved_ts": 450066575326445573,
  "checkpoint_ts": 450066553712672769,
  "checkpoint_time": "2024-05-27 23:40:51.797"
}
```
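
For reference, a changefeed with force_replicate and these filter rules is typically created through the cdc cli. The following is only a minimal sketch; the config file name and server address are illustrative, not taken from the original post:

```bash
# Minimal sketch: create a changefeed with force-replicate and the same
# filter rules as above. The config file name and the TiCDC server address
# are illustrative assumptions.
cat > changefeed.toml <<'EOF'
force-replicate = true

[filter]
rules = ['test.*', 'test2.*']
EOF

cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --sink-uri="mysql://root:xxxxx@10.20.50.88:4000" \
  --config=changefeed.toml
```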

During testing, even after a table on the target side is dropped and the corresponding source table is then modified and has records inserted into it, querying the task status with

```bash
curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/ee67a2ab-338f-44ee-8e80-c382fa2aa0b6
```

still shows the task as normal, with no error reported.
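
One way to tell whether the changefeed is really making progress, rather than relying on the state field alone, is to poll the Open API twice and compare checkpoint_ts. A minimal sketch, assuming jq is installed and the TiCDC server listens on 127.0.0.1:8300:

```bash
# Poll the changefeed twice and compare checkpoint_ts to check whether
# replication is actually advancing. Assumes jq is installed and the TiCDC
# server is reachable at 127.0.0.1:8300.
CF_ID="ee67a2ab-338f-44ee-8e80-c382fa2aa0b6"
API="http://127.0.0.1:8300/api/v2/changefeeds/${CF_ID}"

first=$(curl -s "$API" | jq -r '.checkpoint_ts')
sleep 30
second=$(curl -s "$API" | jq -r '.checkpoint_ts')

echo "checkpoint_ts: ${first} -> ${second}"
if [ "$second" -le "$first" ]; then
  echo "checkpoint is not advancing; check the TiCDC logs for sink errors"
fi
```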

| username: Jellybean | Original post link

If you check the TiCDC operation logs, you should see an exception indicating that the sink cannot work properly.

After a certain number of internal retries, the synchronization task will either report an error or get stuck, which will then trigger a replication-lag alarm.
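
For context, the configuration above sets changefeed_error_stuck_duration to 1800000000000 ns (30 minutes), so the changefeed can keep retrying recoverable sink errors for up to that long (staying in the warning state) before it is marked as failed. A quick way to look for sink-side errors in the meantime is to grep the TiCDC log; the log path below assumes a default TiUP deployment and may differ in your environment:

```bash
# Look for recent sink-related errors (for example, the downstream table not
# existing) in the TiCDC log. The log path assumes a default TiUP deployment
# with the cdc server on port 8300; adjust to your environment.
grep -iE "error|warn" /tidb-deploy/cdc-8300/log/cdc.log | grep -i "sink" | tail -n 20
```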

| username: zhaokede | Original post link

Are there any errors reported in the execution logs?

| username: 小龙虾爱大龙虾 | Original post link

Do you think this changefeed is still advancing normally?