Error in Synchronizing Data with flink-connector-tidb-cdc

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: flink-connector-tidb-cdc同步数据报错

| username: dadong13

【TiDB Usage Environment】
【TiDB Version】
【Reproduction Path】What operations were performed to encounter the issue
【Encountered Issue】When reading TiDB data through flink-connector-tidb-cdc with 'scan.startup.mode' = 'latest-offset', there were no errors at first, but after a while the job started reporting "failed to get member from pd server" and "UNAVAILABLE: Keepalive failed. The connection is likely gone", and the synchronized data can no longer be seen. (See the source-definition sketch below.)
【Resource Configuration】
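For context, here is a minimal sketch of the kind of flink-connector-tidb-cdc source described above. Only 'scan.startup.mode' = 'latest-offset' and the column names visible in the log output are taken from the report; the PD address, database name, and column types are placeholder assumptions.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

// Sketch of a flink-connector-tidb-cdc source for the time_record table
// seen in the logs; addresses and types below are assumed, not from the post.
public class TiDBCdcSourceSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Register the TiDB table as a CDC source, starting from the latest offset.
        tEnv.executeSql(
            "CREATE TABLE time_record (" +
            "  id BIGINT," +
            "  employee_id BIGINT," +
            "  `date` STRING," +
            "  fence_status INT," +
            "  PRIMARY KEY (id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'tidb-cdc'," +
            "  'pd-addresses' = 'pd-host:2379'," +        // placeholder PD address
            "  'database-name' = 'my_db'," +               // placeholder database
            "  'table-name' = 'time_record'," +
            "  'scan.startup.mode' = 'latest-offset'" +
            ")");

        // Continuously print change events.
        tEnv.executeSql("SELECT * FROM time_record").print();
    }
}
```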

[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (2/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863341785118, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (7/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863079641102, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (7/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863341785118, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (4/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863341785118, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (4/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (8/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863341785118, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (8/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (2/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (1/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863341785118, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (1/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (1/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (5/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (5/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (6/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (8/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (6/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (4/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (3/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863603929106, regionId: 213496
[Legacy Source Thread - Source: TableSourceScan(table=[[default_catalog, default_database, time_record]], fields=[id, employee_id, date, fence_status]) (3/8)#0] INFO org.tikv.cdc.CDCClient - handle resolvedTs: 437453863879180289, regionId: 213496
[PDClient-update-leader-pool-0] WARN org.tikv.common.PDClient - failed to get member from pd server.
org.tikv.shade.io.grpc.StatusRuntimeException: UNAVAILABLE: Keepalive failed. The connection is likely gone
at org.tikv.shade.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:287)
at org.tikv.shade.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:268)
at org.tikv.shade.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:175)
at org.tikv.kvproto.PDGrpc$PDBlockingStub.getMembers(PDGrpc.java:1868)
at org.tikv.common.PDClient.getMembers(PDClient.java:443)
at org.tikv.common.PDClient.tryUpdateLeader(PDClient.java:565)
at org.tikv.common.PDClient.lambda$initCluster$15(PDClient.java:730)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

| username: neilshen | Original post link

I recommend using TiCDC to sync the data to Kafka and then consuming it with Flink. https://docs.pingcap.com/zh/tidb/stable/manage-ticdc#sink-uri-配置-kafka
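A sketch of that alternative pipeline: a TiCDC changefeed writes change events to Kafka (for example in canal-json format), and Flink reads them through the Kafka connector. The broker address, topic name, and changefeed command below are illustrative assumptions; see the linked sink-uri documentation for the actual parameters.

```java
// Suggested pipeline (sketch): TiCDC -> Kafka -> Flink.
// Illustrative changefeed creation (addresses/topic are placeholders):
//   cdc cli changefeed create --pd=http://<pd-host>:2379 \
//     --sink-uri="kafka://<broker>:9092/time_record_topic?protocol=canal-json"
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TiCdcKafkaConsumerSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Read the canal-json change events that the TiCDC changefeed writes to Kafka.
        tEnv.executeSql(
            "CREATE TABLE time_record (" +
            "  id BIGINT," +
            "  employee_id BIGINT," +
            "  `date` STRING," +
            "  fence_status INT" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'time_record_topic'," +                       // placeholder topic
            "  'properties.bootstrap.servers' = '<broker>:9092'," +    // placeholder broker
            "  'properties.group.id' = 'flink-consumer'," +
            "  'scan.startup.mode' = 'latest-offset'," +
            "  'format' = 'canal-json'" +
            ")");

        // Continuously print the consumed change events.
        tEnv.executeSql("SELECT * FROM time_record").print();
    }
}
```

Decoupling the pipeline this way means the Flink job only talks to Kafka, so PD/TiKV connectivity issues like the keepalive failure above no longer affect the consumer directly.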

| username: dadong13 | Original post link

Are there any issues with consuming data directly through flink-connector-tidb-cdc? What known problems does it have?

| username: neilshen | Original post link

flink-connector-tidb-cdc is community-contributed and has not been officially tested.

| username: dadong13 | Original post link

Okay, thank you.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.