7.1 Installation without Grafana Monitoring

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: 7.1 安装没有grafana监控

| username: Hacker_suny

[TiDB Usage Environment] Test
[TiDB Version] 7.1
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.12.3/tiup-cluster display ti
Cluster type: tidb
Cluster name: ti
Cluster version: v7.1.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://172.16.196.204:2379/dashboard
Grafana URL: http://172.16.196.201:3000

[Reproduction Path] Installation
tiup cluster deploy ti v7.1.0 ./topology.yaml
tiup cluster start ti --init
[Encountered Problem: Problem Phenomenon and Impact]
Grafana has no monitoring

[topology.yaml]
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data/monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

pd_servers:
  - host: 172.16.196.202
    ssh_port: 22
    name: "pd01"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"
    numa_node: "0"
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 172.16.196.203
    ssh_port: 22
    name: "pd02"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"
    numa_node: "0"
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000
  - host: 172.16.196.204
    ssh_port: 22
    name: "pd03"
    client_port: 2379
    peer_port: 2380
    deploy_dir: "/data/tidb-deploy/pd-2379"
    data_dir: "/data/tidb-data/pd-2379"
    log_dir: "/data/tidb-deploy/pd-2379/log"
    numa_node: "0"
    config:
      schedule.max-merge-region-size: 20
      schedule.max-merge-region-keys: 200000

tidb_servers:
  - host: 172.16.196.202
    port: 4000
    status_port: 10080
    deploy_dir: "/data/tidb-deploy/tidb-4000"
    log_dir: "/data/tidb-deploy/tidb-4000/log"
  - host: 172.16.196.203
    port: 4000
    status_port: 10080
    deploy_dir: "/data/tidb-deploy/tidb-4000"
    log_dir: "/data/tidb-deploy/tidb-4000/log"
  - host: 172.16.196.204
    port: 4000
    status_port: 10080
    deploy_dir: "/data/tidb-deploy/tidb-4000"
    log_dir: "/data/tidb-deploy/tidb-4000/log"

tikv_servers:
  - host: 172.16.196.202
    port: 20160
    status_port: 20180
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"
  - host: 172.16.196.203
    port: 20160
    status_port: 20180
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"
  - host: 172.16.196.204
    port: 20160
    status_port: 20180
    deploy_dir: "/data/tidb-deploy/tikv-20160"
    data_dir: "/data/tidb-data/tikv-20160"
    log_dir: "/data/tidb-deploy/tikv-20160/log"

tiflash_servers:
  - host: 172.16.196.202
    tcp_port: 9000
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: "/data_tf/tidb-deploy/tiflash-9000"
    data_dir: "/data_tf/tidb-data/tiflash-9000"
    log_dir: "/data_tf/tidb-deploy/tiflash-9000/log"
  - host: 172.16.196.203
    tcp_port: 9000
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data_tf/tidb-deploy/tiflash-9000
    data_dir: /data_tf/tidb-data/tiflash-9000
    log_dir: /data_tf/tidb-deploy/tiflash-900/log
  - host: 172.16.196.204
    tcp_port: 9000
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data_tf/tidb-deploy/tiflash-9000
    data_dir: /data_tf/tidb-data/tiflash-9000
    log_dir: /data_tf/tidb-deploy/tiflash-9000/log

kvcdc_servers:
  - host: 172.16.196.201
    port: 8600
    data_dir: "/data/tidb-data/tikv-cdc-8600"
    log_dir: "/data/tidb-deploy/tikv-cdc-8600/log"

monitoring_servers:
  - host: 172.16.196.201
    ssh_port: 22
    port: 9090
    ng_port: 12020
    deploy_dir: "/data/tidb-deploy/prometheus-8249"
    data_dir: "/data/tidb-data/prometheus-8249"
    log_dir: "/data/tidb-deploy/prometheus-8249/log"
    rule_dir: /home/tidb/prometheus_rule
    scrape_interval: 15s
    scrape_timeout: 10s

grafana_servers:
  - host: 172.16.196.201
    port: 3000
    deploy_dir: /data/tidb-deploy/grafana-3000
    dashboard_dir: /home/tidb/dashboards
    config:
      log.file.level: warning

alertmanager_servers:
  - host: 172.16.196.201
    ssh_port: 22
    listen_host: 0.0.0.0
    web_port: 9093
    deploy_dir: "/data/tidb-deploy/alertmanager-9093"
    data_dir: "/data/tidb-data/alertmanager-9093"
    log_dir: "/data/tidb-deploy/alertmanager-9093/log"

[Resource Configuration] Enter TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots/Logs/Monitoring]

| username: 像风一样的男子 | Original post link

Please provide specific details about the issue, installation steps, etc.

| username: Hacker_suny | Original post link

Once configured, run:
tiup cluster deploy ti v7.1.0 ./topology.yaml
tiup cluster start ti --init

After installation, accessing Grafana looks just like the attached screenshot: no monitoring dashboards.

| username: 像风一样的男子 | Original post link

Please share your topology.yaml configuration file and a screenshot of the output of tiup cluster display <cluster-name>.

| username: Hacker_suny | Original post link

[The reply contained only a screenshot, which is not available in this translation.]

| username: 像风一样的男子 | Original post link

Check the /data/tidb-deploy/grafana-3000 directory on the Grafana server and see whether it contains any dashboard files.
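
A quick way to check on the Grafana host (a sketch: the deploy path comes from the topology above, and the dashboards/provisioning subdirectory layout assumed here is the usual one for a tiup-deployed Grafana):

ls /data/tidb-deploy/grafana-3000/dashboards/                 # JSON dashboards Grafana should load (assumed location)
ls /data/tidb-deploy/grafana-3000/provisioning/dashboards/    # provisioning config that points Grafana at that directory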

| username: Hacker_suny | Original post link

All the related directories are available.

| username: 像风一样的男子 | Original post link

Are there any JSON files in the dashboards folder?

| username: Hacker_suny | Original post link

There are no JSON files, but I do see a dashboard.yaml file under the dashboards directory inside the provisioning directory.
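
For context, a dashboard provisioning file of this kind normally follows the standard Grafana format, roughly like the sketch below (illustrative only, not necessarily the exact file tiup generates; the provider name and path are assumptions). Its path option tells Grafana which directory to scan for dashboard JSON files, which is why an empty dashboards directory leaves Grafana with nothing to show:

apiVersion: 1
providers:
  - name: tidb-cluster                                    # provider name: assumption
    type: file
    options:
      path: /data/tidb-deploy/grafana-3000/dashboards     # directory Grafana scans for JSON dashboards (assumed)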

| username: buptzhoutian | Original post link

The dashboard_dir directory is on the control machine; tiup copies its contents to the Grafana server. See the official documentation on topology file configuration.

| username: Hacker_suny | Original post link

It is on the control machine, and I created the directory manually.

| username: 像风一样的男子 | Original post link

I just noticed that you pointed the JSON file directory to dashboard_dir: /home/tidb/dashboards, which probably doesn’t have write permissions. I suggest uninstalling Grafana and reinstalling it, then pointing the dashboard_dir to a directory with write permissions.

| username: buptzhoutian | Original post link

Is there a /home/tidb/dashboards directory on the control machine, and does it contain all the JSON files you need?

| username: Hacker_suny | Original post link

There are no JSON files inside. I must have made a mistake somewhere; I plan to rebuild it.

| username: 像风一样的男子 | Original post link

Does the tidb user you used to start TiDB have write permissions for the dashboard_dir /home/tidb/dashboards that you created yourself?

| username: Hacker_suny | Original post link

The directory owner is tidb, so there shouldn't be any permission issues. Still, I will uninstall and reinstall to see what is causing the problem.

| username: buptzhoutian | Original post link

Setting dashboard_dir means that you choose not to use the default dashboards provided by tiup and instead supply your own.

If you do not have custom dashboards, you do not need this setting.
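
So if the standard TiDB dashboards are all that is needed, the grafana_servers block from the topology above can simply drop dashboard_dir, for example:

grafana_servers:
  - host: 172.16.196.201
    port: 3000
    deploy_dir: /data/tidb-deploy/grafana-3000
    config:
      log.file.level: warning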

| username: Hacker_suny | Original post link

Oh, I see. Okay, I'll use the default then.

| username: Hacker_suny | Original post link

Reinstalling with the default settings works.
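
For completeness, rebuilding a test cluster like this amounts to something along these lines (a sketch: tiup cluster destroy wipes the cluster's data, so only run it on a disposable environment, and topology.yaml here is the corrected file without dashboard_dir):

tiup cluster destroy ti
tiup cluster deploy ti v7.1.0 ./topology.yaml
tiup cluster start ti --init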

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.