Memory Release in tidb-server

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tidb-server内存释放 (tidb-server memory release)

| username: TiDBer_pkQ5q1l0

[TiDB Usage Environment] Production Environment
[TiDB Version] 5.2.1
[Encountered Problem: Phenomenon and Impact]
The tidb-server has accumulated too many slow log files, and memory usage has almost exploded. We have already cleaned up the slow log files; how can we release the memory without restarting tidb-server?

(Screenshots attached to the original post are not reproduced in this translation.)
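
For reference, a minimal cleanup sketch for the slow-log buildup described above. It assumes the default slow-log file name (tidb-slow.log, set by the slow-query-file config item) and a placeholder log directory; adjust both to your deployment before running anything:

```
# List the slow-log files and their sizes (path is a placeholder).
ls -lh /tidb-deploy/tidb-4000/log/tidb-slow.log*

# Remove rotated slow-log files older than 7 days, keeping the live file.
find /tidb-deploy/tidb-4000/log -name 'tidb-slow.log.*' -mtime +7 -delete
```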
| username: xingzhenxiang | Original post link

In this situation, to avoid any impact, I simply reload the node. My read and write traffic is separated.
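
For context, reloading a single TiDB instance with TiUP might look like the sketch below; the cluster name and node address are placeholders:

```
# Rolling-restart only one TiDB instance; the other nodes keep serving.
tiup cluster reload <cluster-name> -N 10.0.0.2:4000
```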

| username: xfworld | Original post link

  • Optimize slow queries to reduce memory usage…
  • Shorten the GC interval to improve GC efficiency (see the sketch after this list)
  • Upgrade to version 6.5.X, as higher Golang versions have more efficient memory reclamation, which reduces the risk of OOM
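A hedged note on the GC suggestion: if it refers to TiDB's MVCC GC, the interval and retention window are exposed as the system variables tidb_gc_run_interval and tidb_gc_life_time (both default to 10m0s); the connection details below are placeholders. The Go runtime GC, by contrast, is tuned through the GOGC environment variable and requires a process restart, so it would not release memory in place.

```
# If tidb_gc_life_time was raised (e.g. for backups), lowering it back
# to the default reduces the MVCC versions TiDB has to retain and scan.
mysql -h 127.0.0.1 -P 4000 -u root -p -e "
  SET GLOBAL tidb_gc_run_interval = '10m0s';
  SET GLOBAL tidb_gc_life_time = '10m0s';"
```
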
| username: TiDBer_pkQ5q1l0 | Original post link

The production environment cannot be upgraded casually.

| username: xfworld | Original post link

Find spare resources, synchronize the data directly, and then redo the POC.

| username: TiDBer_pkQ5q1l0 | Original post link

Reload will restart the TiDB process, right?

| username: xingzhenxiang | Original post link

Yes. I have separated read and write traffic, with HAProxy in front for load balancing, so restarting one machine is not a problem. Without this setup, all SQL sent to the node whose memory is exhausted would turn into slow queries, which would also affect the business.
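
As a rough illustration of that setup (not the poster's actual config), an HAProxy split between write and read endpoints could look like the sketch below; all ports, addresses, and server names are placeholders:

```
# Append illustrative write/read listeners to the HAProxy config.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
listen tidb-write
    bind *:3390
    mode tcp
    server tidb-w1 10.0.0.1:4000 check

listen tidb-read
    bind *:3391
    mode tcp
    balance leastconn
    server tidb-r1 10.0.0.2:4000 check
    server tidb-r2 10.0.0.3:4000 check
EOF
```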

| username: tidb菜鸟一只 | Original post link

If there are multiple TiDB nodes, reloading a TiDB node has minimal impact.
If you really don’t want to restart, try executing this command as root to see if it can free up some cache:
echo 3 > /proc/sys/vm/drop_caches
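
Worth noting: drop_caches releases the kernel page cache, dentries, and inodes, not memory held by the tidb-server process itself, so the effect may be limited. A slightly safer variant flushes dirty pages first:

```
# Write dirty pages back to disk, then drop page cache, dentries and inodes.
sync
echo 3 > /proc/sys/vm/drop_caches
```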

| username: 人如其名 | Original post link

Theoretically, if it keeps running, the tidb-server will either hit an OOM (out of memory) error or finish execution; it will not loop forever. In my past experience, though, the issue was only ever resolved by restarting the tidb-server. The official fix merely added memory tracking to avoid excessive memory usage when parsing slow logs. From the screenshots, a lot of memory is consumed while executing the getOneLine function, likely because your slow logs contain many large batch statements, such as INSERT statements with thousands of VALUES rows.
Related official fix: executor: add memory tracker for quering slow_query to avoid TiDB server oom by crazycs520 · Pull Request #33953 · pingcap/tidb · GitHub
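
If the memory spike happens while querying the slow log, one mitigation (a sketch with a placeholder time range) is to constrain the Time column so tidb-server parses fewer log lines per query:

```
# INFORMATION_SCHEMA.SLOW_QUERY parses the slow-log file on demand;
# a narrow time filter limits how much of the file is read.
mysql -h 127.0.0.1 -P 4000 -u root -p -e "
  SELECT Time, Query_time, Query
  FROM INFORMATION_SCHEMA.SLOW_QUERY
  WHERE Time BETWEEN '2023-06-01 00:00:00' AND '2023-06-01 01:00:00'
  ORDER BY Query_time DESC LIMIT 10;"
```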

| username: TiDBer_pkQ5q1l0 | Original post link

Okay, thank you, I’ll wait for it to OOM on its own.

| username: 胡杨树旁 | Original post link

How is read-write separation implemented? Is there a separate tidb-server node for writing? Do the other tidb-server nodes handle read operations?

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.