Can slow logs be set to one file per day?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 慢日志可以设置成一天一个文件吗?

| username: Johnpan

Can the slow log in TiDB 5.4.0 be set to roll over to one file per day? Can the way it is split be customized, as with MySQL?

| username: xfworld | Original post link

No. Refer to the slow-query-related configuration.
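
As background: in TiDB 5.4 the slow log location and threshold are set in the TiDB config file, and the built-in rotation settings (`log.file.max-size` and related items) are size-based rather than date-based, which is why the short answer is no. Below is a minimal sketch for checking what a running instance is using, via the slow-log-related system variables; the host, port, and credentials are placeholders, and it assumes `pymysql` is installed.

```python
#!/usr/bin/env python3
"""Peek at the slow-log-related system variables of a running TiDB instance."""
import pymysql

# Placeholder connection details: adjust host/port/user/password for your cluster.
conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="")
with conn.cursor() as cur:
    # Threshold (in ms) above which a statement is written to the slow log.
    cur.execute("SHOW GLOBAL VARIABLES LIKE 'tidb_slow_log_threshold'")
    print(cur.fetchone())
    # The slow log file that the SLOW_QUERY table parses for the current session.
    cur.execute("SHOW SESSION VARIABLES LIKE 'tidb_slow_query_file'")
    print(cur.fetchone())
conn.close()
```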

| username: HACK | Original post link

Needs improvement…

| username: 天蓝色的小九 | Original post link

We need to wait for this.

| username: 啦啦啦啦啦 | Original post link

Refer to this

| username: wuxiangdong | Original post link

Write a script. I wrote one for MySQL myself as well.
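
A minimal sketch of the kind of script being suggested here: it splits an existing TiDB slow log into one file per day by reading the `# Time:` header that starts each record. The paths are placeholders, and it should be run against a copy or an already-rotated file rather than the file TiDB is still writing to.

```python
#!/usr/bin/env python3
"""Split a TiDB slow log into one file per day based on the `# Time:` headers."""
import os
import sys

def split_by_day(src_path: str, out_dir: str = ".") -> None:
    out_files = {}   # "YYYY-MM-DD" -> open file handle
    current = None   # file handle receiving the record currently being copied
    try:
        with open(src_path, "r", encoding="utf-8", errors="replace") as src:
            for line in src:
                if line.startswith("# Time: "):
                    # Header looks like: "# Time: 2022-04-21T10:20:30.123456+08:00"
                    day = line[8:18]  # the YYYY-MM-DD part of the timestamp
                    if day not in out_files:
                        dest = os.path.join(out_dir, f"tidb-slow-{day}.log")
                        out_files[day] = open(dest, "a", encoding="utf-8")
                    current = out_files[day]
                if current is not None:
                    current.write(line)
    finally:
        for f in out_files.values():
            f.close()

if __name__ == "__main__":
    split_by_day(sys.argv[1] if len(sys.argv) > 1 else "tidb-slow.log")
```

Scheduled once a day (from cron, for example), this yields one `tidb-slow-YYYY-MM-DD.log` per day without touching TiDB's own rotation settings.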

| username: Johnpan | Original post link

Sure, I write scripts for MySQL as well. Can you provide a reference script for TiDB?
Also, the slow queries shown in TiDB's web interface are based on the slow_query table, right? Will that stop working once the log file is split?

| username: Kongdom | Original post link

It is based on the cluster_slow_query table.
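
Since the display is driven by that table, another way to get one file per day is to export a day's worth of rows from `INFORMATION_SCHEMA.CLUSTER_SLOW_QUERY` instead of renaming log files. The sketch below makes that assumption; the connection details are placeholders, only a few common columns are selected, and it requires `pymysql`.

```python
#!/usr/bin/env python3
"""Export one day of slow queries from CLUSTER_SLOW_QUERY into a dated CSV file."""
import csv
import datetime
import pymysql

def export_day(day: datetime.date, out_path: str) -> None:
    start = day.strftime("%Y-%m-%d 00:00:00")
    end = (day + datetime.timedelta(days=1)).strftime("%Y-%m-%d 00:00:00")
    # Placeholder connection details: adjust for your cluster.
    conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="",
                           database="information_schema")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT instance, `time`, db, query_time, query "
                "FROM cluster_slow_query "
                "WHERE `time` >= %s AND `time` < %s",
                (start, end),
            )
            with open(out_path, "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f)
                writer.writerow(["instance", "time", "db", "query_time", "query"])
                writer.writerows(cur.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    yesterday = datetime.date.today() - datetime.timedelta(days=1)
    export_day(yesterday, f"tidb-slow-{yesterday}.csv")
```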

| username: 张雨齐0720 | Original post link

I saw someone else write that this is already possible.

| username: zhouzeru | Original post link

The usual operation is to write a script.

| username: TiDBer_CEVsub | Original post link

Looking forward to the implementation in future versions, or using Python for log splitting.

| username: Johnpan | Original post link

Okay, regardless of whether it’s the slow_query or the cluster_slow_query table, the web display of slow queries won’t work once the log file is split, right?

| username: Kongdom | Original post link

I don’t understand. The slow query feature has nothing to do with splitting, right? You’re talking about splitting files, not tables.