Can Prometheus memory usage be limited through configuration?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Prometheus占用内存,能否通过配置限制下内存使用

| username: xiaoxiaozuofang

[TiDB Usage Environment] Production Environment
[TiDB Version] TiDB 6.1.0
[Reproduction Path] What operations were performed that caused the issue
[Encountered Problem: Problem Phenomenon and Impact] Prometheus is consuming a lot of memory; can its memory usage be limited through configuration?
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]

| username: DBRE | Original post link

It seems that Prometheus does not have a memory limit configuration, but the problem can be mitigated by reducing the number of collected metrics and lowering the collection frequency.

  1. Adjust (increase) the scrape_interval so that metrics are scraped less often.
  2. Add the following lines under the job_name: tikv section of the prometheus.yml file to drop some of the TiKV metrics; there are simply too many collection items. (A sketch of where these settings fit is shown after the snippet below.)
```yaml
metric_relabel_configs:
  - source_labels: [__name__]
    separator: ;
    regex: tikv_thread_nonvoluntary_context_switches|tikv_thread_voluntary_context_switches|tikv_threads_io_bytes_total
    action: drop
  - source_labels: [__name__,name]
    separator: ;
    regex: tikv_thread_cpu_seconds_total;(tokio|rocksdb).+
    action: drop
```
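
For context, here is a minimal sketch of where both changes could live in prometheus.yml. The interval value, job layout, and target address are illustrative assumptions, not the exact file generated by TiUP:

```yaml
# prometheus.yml (illustrative sketch; interval values and target address are assumptions)
global:
  scrape_interval: 30s        # scrape less often than the common 15s default to cut sample volume
  evaluation_interval: 30s

scrape_configs:
  - job_name: tikv
    static_configs:
      - targets: ['10.0.1.11:20180']   # hypothetical TiKV status address
    metric_relabel_configs:            # same drop rules as above, placed under the tikv job
      - source_labels: [__name__]
        separator: ;
        regex: tikv_thread_nonvoluntary_context_switches|tikv_thread_voluntary_context_switches|tikv_threads_io_bytes_total
        action: drop
      - source_labels: [__name__, name]
        separator: ;
        regex: tikv_thread_cpu_seconds_total;(tokio|rocksdb).+
        action: drop
```

Note that if the cluster is managed by TiUP, prometheus.yml may be regenerated when the monitoring component is reloaded or the cluster is scaled, so manual edits can be overwritten and may need to be reapplied.
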
| username: zhanggame1 | Original post link

Monitoring can be deployed on a separate machine.
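
If the cluster is managed by TiUP, one way to do this is a scale-out/scale-in pair that moves the monitoring components to a dedicated host. The topology below is only a sketch and the host address is hypothetical:

```yaml
# scale-out-monitor.yaml (sketch; host address is hypothetical)
monitoring_servers:
  - host: 10.0.1.20        # dedicated machine for Prometheus
grafana_servers:
  - host: 10.0.1.20
alertmanager_servers:
  - host: 10.0.1.20
```

Apply it with tiup cluster scale-out <cluster-name> scale-out-monitor.yaml, verify the new monitoring node is healthy, and then remove the old one with tiup cluster scale-in <cluster-name> --node <old-host>:9090 (9090 being the default Prometheus port).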