TiCDC v6.5.1 Does Not Synchronize Data to Downstream

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TiCDC v6.5.1 不往下游同步数据

| username: kkpeter

【TiDB Usage Environment】Production Environment / Testing / PoC
【TiDB Version】v6.5.1
【Reproduction Path】Operations performed that led to the issue
【Encountered Issue: Problem Phenomenon and Impact】
The "Unified Sorter on disk data size" metric keeps increasing, but TiCDC is not writing any data to the downstream.
【Resource Configuration】
【Attachments: Screenshots / Logs / Monitoring】


| username: zhanggame1 | Original post link

Is it because the data volume is too large and the memory is insufficient? Are you using a mechanical disk for TiCDC?

| username: kkpeter | Original post link

We are using SSDs, and memory usage is less than one tenth of the total.

| username: kkpeter | Original post link

The key issue right now is that no one knows what CDC is actually doing.

| username: kkpeter | Original post link

The tidb_gc_life_time parameter controls the garbage collection (GC) retention window in TiDB. The default value is 10m0s, which means data versions older than 10 minutes become eligible for GC. You can adjust this parameter according to your needs; for example, to keep data for a longer period before it is collected by GC, set it to a larger value such as 24h.
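
For reference, here is a minimal sketch of checking and adjusting this variable from a client. It assumes the pymysql library and placeholder connection details; adjust the host, port, and credentials for your environment.

```python
# Minimal sketch: inspect and extend tidb_gc_life_time so GC does not
# reclaim MVCC versions that the changefeed still needs to read.
# Assumes pymysql and placeholder connection details.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'tidb_gc_life_time'")
        print(cur.fetchone())  # e.g. ('tidb_gc_life_time', '10m0s')
        cur.execute("SET GLOBAL tidb_gc_life_time = '24h'")  # retain versions for 24 hours
finally:
    conn.close()
```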

| username: xfworld | Original post link

Check the status and running logs of the changefeed.
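
For reference, a minimal sketch of pulling changefeed status programmatically, assuming the TiCDC OpenAPI (v1) is reachable on the default address 127.0.0.1:8300; `cdc cli changefeed list` / `cdc cli changefeed query` return similar information.

```python
# Minimal sketch: list changefeeds and their states through the TiCDC
# OpenAPI v1. Assumes a TiCDC server at 127.0.0.1:8300; field names may
# differ slightly between versions, so missing keys are tolerated.
import requests

CDC = "http://127.0.0.1:8300"

for cf in requests.get(f"{CDC}/api/v1/changefeeds", timeout=5).json():
    print(cf.get("id"), cf.get("state"), cf.get("checkpoint_time"), cf.get("error"))
```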

| username: dba-kit | Original post link

Is there a large transaction? You can check the sink output count metrics to see whether data is actually being written downstream.

| username: kkpeter | Original post link

There should be a large transaction.

| username: dba-kit | Original post link

It seems that version 6.5.1 splits large transactions by default. Did you explicitly configure the changefeed not to split transactions when you created it?
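
For reference, a minimal sketch of checking whether a changefeed's sink URI sets the transaction-atomicity parameter (in recent versions, "none" lets TiCDC split large single-table transactions, while "table" preserves them). It assumes the TiCDC OpenAPI v1 on the default port and a hypothetical changefeed ID my-changefeed.

```python
# Minimal sketch: read a changefeed's sink URI and check the
# transaction-atomicity parameter. Assumes OpenAPI v1 at the default
# address and a hypothetical changefeed ID "my-changefeed".
import requests
from urllib.parse import urlparse, parse_qs

detail = requests.get(
    "http://127.0.0.1:8300/api/v1/changefeeds/my-changefeed", timeout=5
).json()
sink_uri = detail.get("sink_uri", "")
params = parse_qs(urlparse(sink_uri).query)
# "none" (the default in recent versions) allows splitting large transactions.
print("transaction-atomicity:", params.get("transaction-atomicity", ["none"])[0])
```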

| username: asddongmen | Original post link

  1. What is the downstream? Check the monitoring for downstream write latency.
  2. Can you provide more monitoring metrics, such as the metrics in the dataflow section?
  3. Check the monitoring in the lag analyze section.
  4. Is ResolvedTs progressing, or is it still stuck? (A quick way to check is sketched below.)
  5. Check the logs for any errors, especially for the keyword “too long.”

If possible, please provide anonymized logs for troubleshooting, thank you.
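
For point 4 above, a minimal sketch of polling the changefeed to see whether its timestamps advance, assuming the TiCDC OpenAPI v1 on the default port and a hypothetical changefeed ID my-changefeed; field names may vary slightly by version.

```python
# Minimal sketch for point 4: poll the changefeed twice and compare the
# resolved/checkpoint timestamps to see whether they are advancing.
# Assumes OpenAPI v1 at 127.0.0.1:8300 and a hypothetical changefeed
# ID "my-changefeed".
import time
import requests

URL = "http://127.0.0.1:8300/api/v1/changefeeds/my-changefeed"

def snapshot():
    d = requests.get(URL, timeout=5).json()
    return d.get("resolved_ts"), d.get("checkpoint_tso")

first = snapshot()
time.sleep(30)  # wait long enough for progress to be visible
second = snapshot()
print("resolved_ts advanced: ", second[0] != first[0])
print("checkpoint_tso advanced:", second[1] != first[1])
```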

| username: redgame | Original post link

Take a look at the lag analyze panel.