How to Clear Incorrect Warning Messages in Prometheus?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 如何清除Prometheus错误的警告信息?

| username: Steve阿辉

This .69 node is one I don't want; it has already been scaled in and removed, yet there are still 9 warnings in total.

Does anyone have experience with Prometheus? Because I scaled in directly without first transferring the PD leader, the monitoring ended up in an abnormal state, although the cluster itself is fine. The root cause is that the monitoring information in Prometheus was not cleaned up properly, which produces the spurious cluster warnings.

My plan is to go into Prometheus, edit the labels and related entries in its configuration file, and then restart Prometheus, which should resolve the issue.
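Before editing anything by hand, it is worth confirming that the stale alerts really come from leftover scrape targets. A minimal check, assuming a default TiUP deploy layout (the deploy path and the .69 address here are assumptions; adjust them to your environment):

# List the instances Prometheus is still scraping; a decommissioned node
# that still appears here confirms the config was never regenerated.
curl -s http://172.16.16.66:9090/api/v1/targets | grep -o '"instance":"[^"]*"' | sort -u

# Under TiUP's default layout, the generated scrape config lives in the
# deploy directory; grep it for the removed node.
grep -n "\.69" /tidb-deploy/prometheus-9090/conf/prometheus.yml

With a TiUP-managed cluster, though, editing this file by hand is usually unnecessary: as the replies below show, tiup cluster reload regenerates it from the stored topology.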

| username: GreenGuan | Original post link

reload Prometheus
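
For context: on a standalone Prometheus (one not managed by TiUP), the server can re-read its configuration without a full restart, either by sending it a SIGHUP or, if it was started with --web.enable-lifecycle, through the HTTP reload endpoint. A minimal sketch, using this thread's host and port:

# Option 1: signal the process to re-read prometheus.yml
kill -HUP $(pidof prometheus)

# Option 2: HTTP reload (only works with --web.enable-lifecycle)
curl -X POST http://172.16.16.66:9090/-/reload

For a TiUP-managed cluster, prefer tiup cluster reload (shown below), since TiUP also regenerates prometheus.yml from the cluster topology before restarting the service.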

| username: Steve阿辉 | Original post link

Could you explain in more detail? My technical background is relatively weak. What are the exact steps: which commands do I run, and in which directory?

| username: GreenGuan | Original post link

Is it managed by TiDB Operator or TiUP?

| username: Steve阿辉 | Original post link

The cluster is managed by TiUP.

| username: GreenGuan | Original post link

tiup cluster reload -N &lt;your Prometheus IP&gt;:9090 (if nothing has been changed)

| username: Steve阿辉 | Original post link

[tidb@tiup-new ~]$ tiup cluster reload -N 172.16.16.66:9090
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster reload -N 172.16.16.66:9090
Reload a TiDB cluster's config and restart if needed

Usage:
  tiup cluster reload [flags]

Flags:
      --force                   Force reload without transferring PD leader and ignore remote error
  -h, --help                    help for reload
      --ignore-config-check     Ignore the config check result
  -N, --node strings            Only reload specified nodes
  -R, --role strings            Only reload specified roles
      --skip-restart            Only refresh configuration to remote and do not restart services
      --transfer-timeout uint   Timeout in seconds when transferring PD and TiKV store leaders, also for TiCDC drain one capture (default 600)

Global Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'
[tidb@tiup-new ~]$

| username: Steve阿辉 | Original post link

Do I still need to add parameters?

| username: GreenGuan | Original post link

You are missing the name of your TiDB cluster. The complete command is:
tiup cluster reload your_tidb_cluster_name -N 172.16.16.66:9090
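
If you are unsure of the cluster name, tiup cluster list prints every cluster managed from this control machine. A minimal end-to-end sketch (the name mycluster is a placeholder):

# Find the cluster name
tiup cluster list

# Reload only the Prometheus node; TiUP regenerates its configuration
# from the stored topology and then restarts the service
tiup cluster reload mycluster -N 172.16.16.66:9090

# Verify the decommissioned node no longer appears in the topology
tiup cluster display mycluster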

| username: Steve阿辉 | Original post link

It has been fixed, thank you very much.
