Can the storage retention time of Top SQL be modified?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: topSQL的存储保留时间可以修改么?

| username: dba-kit

Currently, there is a tsdb directory in the Prometheus data directory with a large amount of data. Is there any way to clean it up?

| username: tidb菜鸟一只 | Original post link

  • storage_retention: the retention time for Prometheus monitoring data; the default is “30d”.
    You can set it to a shorter duration (see the sketch below).
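
A minimal sketch of how this is typically done with TiUP (the cluster name and host are placeholders; monitoring_servers.storage_retention is the topology field meant here):

```shell
# Edit the topology, shorten the retention window, then reload only the
# monitoring node so the regenerated config picks up the change:
tiup cluster edit-config <cluster-name>
#   monitoring_servers:
#     - host: 10.0.x.x
#       storage_retention: 15d
tiup cluster reload <cluster-name> -R prometheus
```
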
| username: dba-kit | Original post link

This has already been changed to 15 days. But although the tsdb directory is inside the Prometheus data directory, it seems to be written by some component of the dashboard and is not governed by this parameter, right?

| username: 啦啦啦啦啦 | Original post link

Top SQL is not stored in Prometheus, right?

| username: Kamner | Original post link

You can modify the collection frequency.

Log in to the Prometheus server and locate the configuration file in the deploy directory: /tidb/tidb-deploy/prometheus-9090/conf/prometheus.yml

Modify the scrape_interval parameter. It can be set individually for each scrape job, but here we set it uniformly to 30s.

---
global:
  scrape_interval:     30s # By default, scrape targets every 15 seconds.
  evaluation_interval: 30s # By default, evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).
  external_labels:
    cluster: 'tidb'
    monitor: "prometheus"

...

  - job_name: "blackbox_exporter_10.0.XXX.XXX:9115_icmp"
    scrape_interval: 30s
    metrics_path: /probe
    params:
      module: [icmp]
    static_configs:

Restart the Prometheus component

tiup cluster restart datamart --role prometheus

| username: dba-kit | Original post link

Nowadays the more standard approach is to change it through monitoring_servers.storage_retention in the topology; that way you don't need to worry about the setting being lost when TiUP regenerates the Prometheus configuration during scaling operations.
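
If you want to confirm the value actually took effect, one hedged check (the script path follows the deploy directory mentioned earlier in this thread; TiUP typically renders the retention setting into Prometheus's startup script):

```shell
grep -i retention /tidb/tidb-deploy/prometheus-9090/scripts/run_prometheus.sh
```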

| username: dba-kit | Original post link

Uh, it seems you all missed the point. It's not that Prometheus's own storage is large; it's the tsdb directory inside it that holds the bulk of the data, and that directory appears to be written by some other component.

| username: mono | Original post link

This is time-series data. Because many metrics are collected at a high frequency, it takes up a lot of space. Either disable certain metrics or reduce the data retention period (a sketch of the first option follows).
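
As a sketch of the "disable certain metrics" option in stock Prometheus (the metric name is purely illustrative, and note that manual edits to prometheus.yml are overwritten whenever TiUP regenerates it):

```shell
# Add under the relevant scrape job in prometheus.yml:
#
#   metric_relabel_configs:
#     - source_labels: [__name__]
#       regex: 'some_noisy_metric_.*'
#       action: drop
#
# Then tell Prometheus to reload its configuration:
kill -HUP "$(pgrep -x prometheus)"
```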

| username: dba-kit | Original post link

I checked with lsof, and the tsdb directory is written by the ng-monitoring component.
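
For reference, this is roughly the check (the data-directory path is illustrative; adjust it to your deployment):

```shell
# List the processes holding files open under the tsdb directory:
lsof +D /path/to/prometheus-data/tsdb | head
```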

| username: dba-kit | Original post link

I see that ng-monitoring-server has a --retention-period parameter, but there is no entry for it in the configuration file.
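
A quick way to confirm the flag and its default on your binary (assuming ng-monitoring-server is on the PATH):

```shell
ng-monitoring-server --help | grep -i retention
```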

| username: dba-kit | Original post link

Manually adding retention-period = "15d" to /home/tidb/tidb-deploy/prometheus-9090/conf/ngmonitoring.toml did not take effect. In the end, I deleted the directory by running rm -r tsdb and then restarted the ng-monitoring-server, which finally freed up the space. (A new tsdb directory will be automatically generated after the restart)
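
Spelled out as commands, the workaround looks roughly like this (cluster name and data path are placeholders; in TiUP deployments ng-monitoring is typically managed together with the prometheus role, so stopping that role first avoids deleting files while they are still being written):

```shell
tiup cluster stop <cluster-name> -R prometheus
rm -r /path/to/prometheus-data/tsdb    # ng-monitoring's data, not Prometheus's own blocks
tiup cluster start <cluster-name> -R prometheus    # a fresh tsdb directory is recreated
```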

| username: Kamner | Original post link

I have also encountered your problem, and it was resolved by modifying the retention time and collection frequency.

  1. Modify cluster parameters
    Add storage_retention: 20d

  2. Modify collection frequency

The storage format of Prometheus is tsdb, and rm can certainly free up space. However, this is just a temporary fix; for the proper knobs, check the official Prometheus documentation on storage. A sketch of the relevant flags follows.
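
The knobs that documentation describes are startup flags; a sketch (values are illustrative, and the size-based flag is available in recent Prometheus 2.x releases):

```shell
prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=50GB
```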

| username: dba-kit | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.