Error in DM execution: invalid connection

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: dm执行报错 :invalid connection

| username: 大钢镚13146

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration]
[Attachments: Screenshots / Logs / Monitoring]
Execution Error:

"ErrCode": 32001,
"ErrClass": "dump-unit",
"ErrScope": "internal",
"ErrLevel": "high",
"Message": "mydumper/dumpling runs with error, with output (may empty): ",
"Workaround": ""
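ErrCode 32001 indicates that the dump unit (Dumpling) exited with an error; the full sub-error is often visible through `dmctl query-status`. A minimal sketch, assuming a dm-master at `127.0.0.1:8261` and a task named `my_task` (both placeholders for your own deployment):

```shell
# Query the task status to see the dump unit's full error output.
# 127.0.0.1:8261 and my_task are placeholders for your dm-master
# address and your task name.
tiup dmctl --master-addr 127.0.0.1:8261 query-status my_task
```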

The backend logs show the same error.

The connections on both sides are normal: I can see the SQL statements being executed on the upstream MySQL, and the DM user's connections on the downstream TiDB.
Occasionally, the following error occurs

| username: 大钢镚13146 | Original post link

Restarted the task, but it didn’t work.

| username: 小龙虾爱大龙虾 | Original post link

Does it report the error immediately after the task is created?

| username: 大钢镚13146 | Original post link

About a dozen seconds after the task starts.

| username: 大钢镚13146 | Original post link

On the upstream MySQL I can clearly see the data-extraction SQL, and on the downstream TiDB I can see the DM user's connections, so it doesn't seem to be a connection issue.

| username: Billmay表妹 | Original post link

You can try the following steps:

  1. Ensure the TiDB database connection information is correct: Check your connection configuration, including hostname, port number, username, and password, to ensure it matches the TiDB database connection information.
  2. Check the network connection: Ensure your network connection is normal. You can try using other tools (such as MySQL client) to connect to the TiDB database to verify if the connection is normal.
  3. Check the status of the TiDB database: Ensure the TiDB database is running normally and there are no other anomalies or errors. You can check the database status through TiDB’s monitoring tools or logs.
  4. Check permissions: Ensure the database account you are using has sufficient permissions to execute mydumper or dumpling operations. You can try using an account with higher permissions or contact the database administrator for permission settings.
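Steps 2 and 4 above can be checked from the command line. A sketch, with host, port, and user as placeholders for the values in your source configuration; the privilege list is the typical minimum Dumpling needs, not taken from this thread:

```shell
# Placeholders: adjust to match your DM source config.
HOST=<upstream-mysql-host>
PORT=3306
USER=<dm-user>

# Step 2: verify basic connectivity with a plain MySQL client.
mysql -h "$HOST" -P "$PORT" -u "$USER" -p -e "SELECT 1"

# Step 4: check the account's privileges. Dumpling typically needs
# at least SELECT and RELOAD (plus LOCK TABLES and REPLICATION CLIENT
# for consistent dumps with binlog position).
mysql -h "$HOST" -P "$PORT" -u "$USER" -p -e "SHOW GRANTS"
```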

| username: 小龙虾爱大龙虾 | Original post link

Test the SQL that is causing the error.

| username: 大钢镚13146 | Original post link

Running a single table causes no issues, so it doesn't seem like a connection problem. The keyword in the error has been masked out, so we can't see the specific SQL.

| username: 大钢镚13146 | Original post link

context canceled

| username: 有猫万事足 | Original post link

This error usually occurs when the upstream MySQL cannot be connected. From the machine where the dm_worker is located, try connecting to the upstream MySQL based on your source configuration.
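This check can be run directly on the dm-worker host; a minimal sketch, with the host and user as placeholders taken from your source configuration:

```shell
# Run this on the dm-worker host, using the exact host/port/user from
# your source configuration (placeholders below), to rule out a
# network path or DNS difference between your workstation and the worker.
mysql -h <upstream-mysql-host> -P 3306 -u <dm-user> -p -e "SELECT VERSION()"
```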

| username: 大钢镚13146 | Original post link

It can be connected. I can see the dumped SQL statements and the connection threads of the backend TiDB.

| username: forever | Original post link

Is the load on the target database or the DM machine high? You can run a sustained ping with large packets and also watch the network.
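The sustained large-packet ping suggested above might look like this; `<db-host>` is a placeholder for the upstream or downstream database host:

```shell
# Sustained ping with a large payload (1400-byte data size) to watch
# for packet loss or latency spikes between the DM host and the
# database. <db-host> is a placeholder.
ping -s 1400 -c 300 <db-host>
```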

| username: 大钢镚13146 | Original post link

Target? This is the dump stage; it doesn't seem to have reached the target step yet.
Dumping two of the tables individually works fine, and the downstream TiDB is empty with no data.

| username: Hacker007 | Original post link

Based on my practice, it should be that there is too much data in the upstream table. Try increasing the max-allowed-packet parameter. If that doesn’t work, you can split the synchronized database into multiple tasks.
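Raising `max_allowed_packet` on the upstream MySQL, as suggested above, might be sketched like this (connection details are placeholders; note that `SET GLOBAL` only affects new connections, so the DM task must be restarted afterwards):

```shell
# Check the current value on the upstream MySQL.
mysql -h <upstream-host> -u root -p \
  -e "SHOW VARIABLES LIKE 'max_allowed_packet'"

# Raise it to 1 GiB (1073741824 bytes, the server-side maximum).
# Only new connections pick this up, so restart the DM task afterwards.
mysql -h <upstream-host> -u root -p \
  -e "SET GLOBAL max_allowed_packet = 1073741824"
```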

| username: andone | Original post link

Increase max-allowed-packet

| username: dba远航 | Original post link

Check connection-related parameters, including the number of connections, allowed packet size, connection time, etc.
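The connection-related parameters mentioned here can be inspected in one pass, on both the upstream MySQL and the downstream TiDB (connection details are placeholders):

```shell
# Inspect connection limits, packet size, and timeout settings on
# either database. Host and user are placeholders.
mysql -h <host> -u <user> -p -e "
  SHOW VARIABLES WHERE Variable_name IN
    ('max_connections', 'max_allowed_packet',
     'wait_timeout', 'net_read_timeout', 'net_write_timeout');"
```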

| username: 随缘天空 | Original post link

[The original post contained only an image.]

| username: 大钢镚13146 | Original post link

I have already tried adjusting these things before, but it didn’t have much effect.

| username: 大钢镚13146 | Original post link

Set it to 1G, but it still doesn’t work.

| username: 大钢镚13146 | Original post link

The trouble is that this is a continuous replication task. There may be table-creation operations later on; if I filter by table, those CREATE TABLE DDLs might not be replicated.
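One way around this concern: DM's `block-allow-list` supports wildcard matching, so a table pattern can cover tables created in the future, including their CREATE TABLE DDL. A task-config sketch with illustrative database and table names (not taken from this thread):

```yaml
# Task-config fragment (names are illustrative). With a wildcard
# table pattern, tables created later that match "t_*" are still
# replicated, including their CREATE TABLE DDL.
block-allow-list:
  bw-rule-1:
    do-dbs: ["app_db"]
    do-tables:
      - db-name: "app_db"
        tbl-name: "t_*"
```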