After scaling in the CDC component, the safe_point does not advance

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: cdc组件缩容以后,safe_point不往前推进 (After scaling in the CDC component, the safe_point does not advance)

| username: chenhanneu

Initial Environment:
Two TiCDC nodes, with a changefeed configured to replicate to MySQL.
Operations:
Remove the changefeed (CDC task).
Scale in both TiCDC nodes.
Phenomenon:
service-gc-safepoint:
{
  "service_id": "ticdc-default-14681978075909900690",
  "expired_at": 1704870439,
  "safe_point": 446898907036516360
}
This leftover CDC service safepoint blocks GC from advancing. Normally, once no TiCDC nodes remain, the CDC service safepoint should be cleaned up automatically. Is there a way to remove it manually, or do we have to wait a day for it to expire?
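
For reference, the two numbers in that output can be decoded. Below is a minimal Python sketch (not from the original thread; the constants are copied from the output above) that converts the `safe_point` TSO and `expired_at` into wall-clock times. A TiDB TSO stores physical milliseconds in its upper bits and an 18-bit logical counter in its lower bits.

```python
from datetime import datetime, timezone

# Values copied from the service-gc-safepoint output above.
safe_point = 446898907036516360  # TiDB TSO: upper bits = physical ms, low 18 bits = logical counter
expired_at = 1704870439          # Unix timestamp in seconds

physical_ms = safe_point >> 18   # drop the 18-bit logical part -> milliseconds since the epoch
safepoint_ts = datetime.fromtimestamp(physical_ms / 1000, tz=timezone.utc)
expiry_ts = datetime.fromtimestamp(expired_at, tz=timezone.utc)

print(f"safe_point pins GC at     {safepoint_ts:%Y-%m-%d %H:%M:%S} UTC")
print(f"service safepoint expires {expiry_ts:%Y-%m-%d %H:%M:%S} UTC")
print(f"TTL is roughly {(expired_at - physical_ms / 1000) / 3600:.1f} hours")
```

The gap between the two works out to roughly 24 hours, which matches the one-day wait mentioned above. As for manual cleanup, TiCDC's `cdc cli unsafe reset --pd=<pd-address>` is documented to clear TiCDC state in PD, including its service GC safepoint (only run it when no changefeeds are live), and newer pd-ctl builds may also offer `service-gc-safepoint delete <service_id>`; check the docs for your version before relying on either.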

| username: 有猫万事足 | Original post link

I have seen a solution before.

| username: 小龙虾爱大龙虾 | Original post link

Just wait for it to expire. I had never paid attention to the service GC safepoint being left behind after a changefeed is deleted normally. Learned something new.
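
If you do wait it out, `expired_at` tells you exactly how long is left. A small sketch, reusing the value from the original post:

```python
import time

expired_at = 1704870439  # expired_at from the service-gc-safepoint output in the original post
remaining = expired_at - time.time()
if remaining > 0:
    print(f"GC should resume in about {remaining / 3600:.1f} hours")
else:
    print("the service safepoint has expired; GC can advance again")
```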

| username: chenhanneu | Original post link

Confirmed as a bug; it has been fixed in the latest 6.5 and 7.1 releases.

| username: dba远航 | Original post link

So it turned out to be a bug.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.