Can DM share monitoring_servers with the TiDB cluster?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: DM可以和TiDB集群共享monitoring_servers吗

| username: TiDBer_iLonNMYE

[TiDB Usage Environment] Testing
[TiDB Version] V5.4.3
[Reproduction Path] What operations were performed to cause the issue
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
The TiDB cluster has already deployed monitoring_servers, grafana_servers, and alertmanager_servers. The data volume is not large, and DM is used to migrate data from MySQL. The DM deployment template also includes monitoring_servers, grafana_servers, and alertmanager_servers. Can they share the same set as the TiDB cluster? Thank you.

| username: Lucien-卢西恩 | Original post link

Yes. You can add the DM targets to the existing Prometheus configuration. Also, when DM is deployed with TiUP, it should be merged into a single Prometheus + Grafana + Alertmanager setup automatically.
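For reference, a minimal sketch of what adding DM targets to the TiDB cluster's existing Prometheus scrape configuration might look like (the IPs are placeholders, and 8261/8262 are the default dm-master/dm-worker ports; verify against your own deployment):

scrape_configs:
  # hypothetical jobs added to the cluster's prometheus.yml so the shared
  # Prometheus also scrapes the DM components
  - job_name: "dm_master"
    static_configs:
      - targets:
          - "10.0.0.11:8261"   # placeholder dm-master host:port
  - job_name: "dm_worker"
    static_configs:
      - targets:
          - "10.0.0.12:8262"   # placeholder dm-worker host:port
          - "10.0.0.13:8262"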

| username: srstack | Original post link

Remember to configure ignore_exporter for all hosts in DM topo.yaml.
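For example, a rough sketch of what that might look like in the DM topology file (host IPs are placeholders; ignore_exporter tells TiUP not to deploy node_exporter/blackbox_exporter on that host, so they do not clash with the TiDB cluster's exporters):

master_servers:
  - host: 10.0.0.11          # placeholder dm-master host
    ignore_exporter: true    # skip node_exporter/blackbox_exporter on this host
worker_servers:
  - host: 10.0.0.12          # placeholder dm-worker host
    ignore_exporter: true
  - host: 10.0.0.13
    ignore_exporter: true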

| username: Jellybean | Original post link

If you deploy DM with the same tiup that manages the TiDB cluster, the Prometheus and Grafana instances are shared; it is the same set.

| username: dba-kit | Original post link

First, note that when deploying the DM cluster, node_exporter and blackbox_exporter (the components that collect machine-level metrics) are not deployed by default. You need to specify their ports manually (if the hosts are shared with TiDB, make sure the ports do not conflict with the TiDB cluster's). See the sketch after the snippet below.

# if monitored is set, node_exporter and blackbox_exporter will be
# deployed with the port specified, otherwise they are not deployed
# on the server to avoid conflict with tidb clusters
#monitored:
#  node_exporter_port: 9100
#  blackbox_exporter_port: 9115
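If you do want the DM deployment to bring its own exporters onto hosts that also run TiDB components, one sketch is to uncomment the block with ports shifted away from the TiDB defaults (9100/9115), for example:

monitored:
  node_exporter_port: 9101      # assumed free port, away from the TiDB default 9100
  blackbox_exporter_port: 9116  # assumed free port, away from the TiDB default 9115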
| username: dba-kit | Original post link

Here is my current configuration. Prometheus, Grafana, and Alertmanager all reuse existing company components as much as possible; you can use it as a reference.

monitoring_servers:
- host: xxx.xxx.xxx.xxx
  # Write a copy of the monitoring data to the remote Prometheus (requires remote_write enabled on the remote end). The remote Prometheus will upload historical monitoring data to S3 through the Thanos component for easy historical queries.
  remote_config:
    remote_write:
    - url: http://prometheus-proxy.xxxxxx.com/api/v1/write
  # Reuse the company's Alertmanager for easy integration with the existing alert system.
  external_alertmanagers:
  - host: alertmanager.xxxxxx.com
    web_port: 80
  # Since the monitoring information is written to the remote end, only 15 days need to be retained locally.
  storage_retention: 15d
  # Facilitate the adjustment of alert rules.
  rule_dir: /root/deploy-config/tidb-config/rules
  # The default instance label is IP, which is not intuitive on the dashboard. Here, it is manually changed to recognizable text.
  additional_scrape_conf:
    relabel_configs:
    - source_labels:
      - __address__
      target_label: target
    - regex: xxx.xxx.xxx.xxx:(.*)
      replacement: tikv6-sata-e001:$1
      source_labels:
      - __address__
      target_label: instance
| username: TiDBer_iLonNMYE | Original post link

Got it all, thank you everyone!

| username: h5n1 | Original post link

Expert, how exactly did you do this? I tried pointing DM and TiDB at the same Prometheus and Grafana, but in the end DM's deployment simply overwrote TiDB's.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.