Is it possible to deploy Prometheus in an existing TiDB cluster?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: Can Prometheus be deployed in an existing TiDB cluster? (可以在已有的tidb集群中部署Prometheus么?)

| username: TiDBer_b1iRkG7I

Background: A TiDB cluster has already been deployed using the tiup tool.

Question: How to deploy Prometheus in the TiDB cluster?

| username: Billmay表妹 | Original post link

See the documentation: Deploy Monitoring Services for the TiDB Cluster | PingCAP Docs

| username: caiyfc | Original post link

Sure, just scale out one more monitoring node.

| username: zhanggame1 | Original post link

Just scale out. Here is what I have tested.
Scale out monitoring:
vi scale-out.yml

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

monitoring_servers:
  - host: 127.0.0.1

grafana_servers:
  - host: 127.0.0.1
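
Optionally, the new topology can be pre-checked against the existing cluster first (a hedged example following the tiup docs; adjust the user and authentication flags for your environment):

tiup cluster check tidb-test scale-out.yml --cluster --user root -p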

Scale out:

root@tidb:~# tiup cluster scale-out tidb-test scale-out.yml -u root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.14.1/tiup-cluster scale-out tidb-test scale-out.yml -u root -p
You have one or more of ["global", "monitored", "server_configs"] fields configured in
        the scale out topology, but they will be ignored during the scaling out process.
        If you want to use configs different from the existing cluster, cancel now and
        set them in the specification fields for each host.
Do you want to continue? [y/N]: (default=N) y
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 127.0.0.1 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 127.0.0.1 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v7.6.0
Role        Host       Ports       OS/Arch       Directories
----        ----       -----       -------       -----------
prometheus  127.0.0.1  9090/12020  linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     127.0.0.1  3000        linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ Download TiDB components
  - Download prometheus:v7.6.0 (linux/amd64) ... Done
  - Download grafana:v7.6.0 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
  - Deploy instance prometheus -> 127.0.0.1:9090 ... Done
  - Deploy instance grafana -> 127.0.0.1:3000 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
  - Generate scale-out config prometheus -> 127.0.0.1:9090 ... Done
  - Generate scale-out config grafana -> 127.0.0.1:3000 ... Done
+ Init monitor config
Enabling component prometheus
        Enabling instance 127.0.0.1:9090
        Enable instance 127.0.0.1:9090 success
Enabling component grafana
        Enabling instance 127.0.0.1:3000
        Enable instance 127.0.0.1:3000 success
Enabling component node_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
Enabling component blackbox_exporter
        Enabling instance 127.0.0.1
        Enable 127.0.0.1 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component prometheus
        Starting instance 127.0.0.1:9090
        Start instance 127.0.0.1:9090 success
Starting component grafana
        Starting instance 127.0.0.1:3000
        Start instance 127.0.0.1:3000 success
Starting component node_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
Starting component blackbox_exporter
        Starting instance 127.0.0.1
        Start 127.0.0.1 success
+ Refresh components configs
  - Generate config pd -> 127.0.0.1:2379 ... Done
  - Generate config tikv -> 127.0.0.1:20160 ... Done
  - Generate config tidb -> 127.0.0.1:4000 ... Done
  - Generate config prometheus -> 127.0.0.1:9090 ... Done
  - Generate config grafana -> 127.0.0.1:3000 ... Done
+ Reload prometheus and grafana
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Scaled cluster `tidb-test` out successfully
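
After the scale-out completes, the cluster status can be confirmed with tiup; the new prometheus and grafana instances should show as Up:

tiup cluster display tidb-test

Grafana should then be reachable on port 3000 of the monitoring host with the cluster's default dashboards.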

| username: TiDBer_RjzUpGDL | Original post link

Sure.

| username: danghuagood | Original post link

Scaling out is not the problem. The key issue is how to retain the historical monitoring data. For example, the old Prometheus has been running for a while and has accumulated monitoring data; if a new Prometheus is set up, how can the old data be migrated to the new instance?

| username: Kongdom | Original post link

:+1: Scaling out and in, always the way to go.

| username: residentevil | Original post link

:+1: Super awesome

| username: 小于同学 | Original post link

Scaling out works.

| username: DBAER | Original post link

Scaling out works.

| username: 这里介绍不了我 | Original post link

Without further ado, just scale out directly.

| username: Kongdom | Original post link

It should support import and export.
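
For example, a rough migration sketch (assuming the old Prometheus was started with --web.enable-admin-api; the host and path placeholders below are illustrative) is to take a TSDB snapshot on the old instance and copy its blocks into the new instance's data directory:

# Trigger a TSDB snapshot on the old Prometheus (admin API must be enabled)
curl -XPOST http://<old-prometheus-host>:9090/api/v1/admin/tsdb/snapshot

# The snapshot lands under <old-data-dir>/snapshots/<snapshot-name>/.
# Stop the new Prometheus, copy the snapshot blocks into its data
# directory (from the topology above: /tidb-data/prometheus-9090), restart:
tiup cluster stop tidb-test -R prometheus
rsync -av <old-data-dir>/snapshots/<snapshot-name>/ /tidb-data/prometheus-9090/
tiup cluster start tidb-test -R prometheus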

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.