Querying the system table PROCESSLIST shows that many update and commit operations have abnormally large mem usage

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 查询系统表PROCESSLIST发现很多update和commit操作的mem内存使用异常的大

| username: 顾刚-数据

[TiDB Usage Environment] Production Environment
[TiDB Version] v4.0.0
[Reproduction Path] Perform update operations on table data
[Encountered Problem: Symptoms and Impact] The system table PROCESSLIST shows many update and commit operations using an abnormally large amount of memory, and CPU usage on both TiDB nodes has exceeded 90%. Connections to the entire cluster are abnormal, and simple queries are extremely slow.

[Resource Configuration] A TiDB cluster with 30 servers


The xxl_job table in the figure actually has only two rows of data. Theoretically, updating the data should not consume such a large amount of memory.
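
For context, a query along the following lines is what surfaces these sessions. This is a minimal sketch that assumes the information_schema.processlist columns as they exist in TiDB v4.x (ID, USER, DB, COMMAND, TIME, STATE, MEM, INFO); adjust for your version.

```sql
-- Minimal sketch: list sessions on this TiDB node ordered by the reported MEM value.
-- Column set assumes the v4.x information_schema.processlist layout.
SELECT ID, USER, DB, COMMAND, TIME, STATE, MEM, INFO
FROM information_schema.processlist
ORDER BY MEM DESC
LIMIT 20;
```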

| username: zhanggame1 | Original post link

This mem column is useless; don’t treat it as having any reference value.

| username: 小龙虾爱大龙虾 | Original post link

That’s a bug in old versions; your server doesn’t even have that much memory. v4.0 is too old, you should upgrade.

| username: zhanggame1 | Original post link

Higher versions are like this too, don’t look at this value anymore.

| username: 江湖故人 | Original post link

Please escalate this issue, it’s not an isolated case anymore.

| username: 小龙虾爱大龙虾 | Original post link

Submit an issue and include the reproduction steps.

| username: 江湖故人 | Original post link

It looks like an old problem; someone has already opened an issue about it:
mem field in information_schema.processlist display abnormal · Issue #18588 · pingcap/tidb · GitHub

| username: Jellybean | Original post link

How about the Grafana monitoring charts and the machine’s memory usage?

Please also post the execution plan of this SQL statement using explain, so it can be analyzed.
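
Something like the following is what is being asked for. It is only a sketch, since the original post does not include the actual statement, and the column names used here against the xxl_job table are placeholders rather than the real schema.

```sql
-- Sketch only: the real UPDATE is not shown in the post, so the SET/WHERE
-- columns here are hypothetical placeholders.
EXPLAIN UPDATE xxl_job SET update_time = NOW() WHERE id = 1;

-- EXPLAIN ANALYZE actually executes the statement and reports runtime statistics,
-- so prefer running it on a read-only equivalent or in a test environment.
EXPLAIN ANALYZE SELECT * FROM xxl_job WHERE id = 1;
```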

| username: dba远航 | Original post link

That value looks obviously fake.

| username: andone | Original post link

No reference value, can be ignored.

| username: 顾刚-数据 | Original post link

CPU usage on both TiDB nodes exceeded 90%, which caused JDBC connections to the TiDB cluster to become abnormal. After I killed the session with the abnormal memory usage, the cluster returned to normal.
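
For reference, terminating a session by its PROCESSLIST ID looks roughly like the sketch below; the ID is made up, and the KILL TIDB form generally needs to be issued on the same TiDB instance that owns the connection.

```sql
-- Look up the suspect connection ID in information_schema.processlist first,
-- then terminate it. The ID below is hypothetical.
KILL TIDB 4058493;
```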

| username: 顾刚-数据 | Original post link

CPU usage on both TiDB nodes exceeded 90%, which caused JDBC connections to the TiDB cluster to become abnormal. I checked the system table and found this anomaly. After killing the session with the abnormal memory usage, the cluster returned to normal.

| username: 顾刚-数据 | Original post link

Yes, the entire cluster doesn’t have that much memory. However, the cluster was in an abnormal state for an hour or two at the time: JDBC connections couldn’t be established, and simple queries were very slow. I then checked the system tables and found this memory anomaly, and once it was cleared the cluster recovered. So I want to ask what caused such a large memory anomaly.

| username: 顾刚-数据 | Original post link

CPU usage on both TiDB nodes was above 90%, and the backend JDBC connections to the TiDB cluster were abnormal. I then checked the system table and found these unusually large memory values. Does that mean the two are unrelated?

| username: 连连看db | Original post link

Upgrade it, it’s too old.

| username: 小龙虾爱大龙虾 | Original post link

+1 Let’s upgrade first, this version is already too old.

| username: Jellybean | Original post link

Check the execution plan of the corresponding SQL statement with explain, and also look through your TiDB logs.

| username: tidb菜鸟一只 | Original post link

The mem value here probably does not reflect the memory usage of the current SQL, but rather the cumulative memory usage of your connection. You are currently running an update, but a large query may have run on the same connection before it, which is why the cumulative figure is high. It does not mean the connection is still occupying that much memory.
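
One way to sanity-check that hypothesis is to look at which statement digests have historically used the most memory. This is a sketch that assumes the information_schema.statements_summary table available from v4.0 onward; the exact column names may differ across versions.

```sql
-- Sketch: statement digests ranked by their largest recorded memory usage.
-- Assumes v4.x-style columns (DIGEST_TEXT, EXEC_COUNT, MAX_MEM).
SELECT DIGEST_TEXT, EXEC_COUNT, MAX_MEM
FROM information_schema.statements_summary
ORDER BY MAX_MEM DESC
LIMIT 10;
```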