Unable to Open the Dashboard of a TiDB Single-Node Cluster Deployed in the WSL Environment on Windows 11

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: win11系统wsl环境下部署tidb单机集群Dashboard无法打开 (Dashboard cannot be opened for a TiDB single-node cluster deployed in a WSL environment on Windows 11)

| username: huanglao2002

I can successfully deploy a single-node TiDB cluster in the WSL environment on Windows 11, and the database itself can be accessed normally. However, when I try to open the Dashboard, the connection is refused. After logging into the server and checking the listening sockets, I found the following:

jin@TABLET-PBTEB744:~$ netstat -an | grep 2379 | grep LISTEN
tcp6       0      0 :::2379                 :::*                    LISTEN

Judging from this, something looks wrong: PD appears to be listening only on the IPv6 wildcard address.
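For context, on Linux a tcp6 socket bound to ::: is usually dual-stack and also accepts IPv4 connections unless net.ipv6.bindv6only is set to 1, so it may help to check that setting and probe the Dashboard endpoint from inside WSL first. A sketch (output will differ per machine):

# 0 means the IPv6 wildcard socket also serves IPv4 (dual-stack)
sysctl net.ipv6.bindv6only

# Probe the Dashboard endpoint from inside WSL to rule out the PD side
curl -I http://127.0.0.1:2379/dashboard/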

On the same WSL system, a cluster started with tiup playground can open the Dashboard normally, and its listening information is as follows:

jin@TABLET-PBTEB744:~$ netstat -an | grep 2379 | grep LISTEN
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN

The listening address is correct, and the Dashboard can be opened normally.
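To see which process owns the socket and exactly which address it bound in the two cases, ss gives a bit more detail than netstat. A sketch (sudo is needed to show process names):

# Shows the bound address and the owning process for port 2379
sudo ss -lntp | grep 2379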

Does anyone have any suggestions? Could TiDB development experts also take a look?

| username: CuteRay | Original post link

Are you using Ubuntu on WSL? Did the cluster start successfully after deployment? Also, could you share the configuration file you used to deploy the single-node TiDB cluster?

| username: huanglao2002 | Original post link

  1. Yes, the cluster started normally. The startup and display information are as follows:
    jin@TABLET-PBTEB744:~$ tiup cluster start tidbtest
    tiup is checking updates for component cluster …
    Starting component cluster: /home/jin/.tiup/components/cluster/v1.11.1/tiup-cluster start tidbtest
    Starting cluster tidbtest…
  • [ Serial ] - SSHKeySet: privateKey=/home/jin/.tiup/storage/cluster/clusters/tidbtest/ssh/id_rsa, publicKey=/home/jin/.tiup/storage/cluster/clusters/tidbtest/ssh/id_rsa.pub
  • [Parallel] - UserSSH: user=tidb, host=127.0.0.1
  • [Parallel] - UserSSH: user=tidb, host=127.0.0.1
  • [Parallel] - UserSSH: user=tidb, host=127.0.0.1
  • [Parallel] - UserSSH: user=tidb, host=127.0.0.1
  • [Parallel] - UserSSH: user=tidb, host=127.0.0.1
  • [ Serial ] - StartCluster
    Starting component pd
    Starting instance 127.0.0.1:2379
    [sudo] password for jin:
    Start instance 127.0.0.1:2379 success
    Starting component tikv
    Starting instance 127.0.0.1:20160
    Start instance 127.0.0.1:20160 success
    Starting component tidb
    Starting instance 127.0.0.1:4000
    Start instance 127.0.0.1:4000 success
    Starting component prometheus
    Starting instance 127.0.0.1:9090
    Start instance 127.0.0.1:9090 success
    Starting component grafana
    Starting instance 127.0.0.1:3000
    Start instance 127.0.0.1:3000 success
    Starting component node_exporter
    Starting instance 127.0.0.1
    Start 127.0.0.1 success
    Starting component blackbox_exporter
    Starting instance 127.0.0.1
    Start 127.0.0.1 success
  • [ Serial ] - UpdateTopology: cluster=tidbtest
    Started cluster tidbtest successfully
    jin@TABLET-PBTEB744:~$ tiup cluster display tidbtest
    tiup is checking updates for component cluster …
    Starting component cluster: /home/jin/.tiup/components/cluster/v1.11.1/tiup-cluster display tidbtest
    Cluster type: tidb
    Cluster name: tidbtest
    Cluster version: v6.5.0
    Deploy user: tidb
    SSH type: none
    Dashboard URL: http://127.0.0.1:2379/dashboard
    Grafana URL: http://127.0.0.1:3000
ID               Role        Host       Ports        OS/Arch       Status   Data Dir                    Deploy Dir
--               ----        ----       -----        -------       ------   --------                    ----------
127.0.0.1:3000   grafana     127.0.0.1  3000         linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
127.0.0.1:2379   pd          127.0.0.1  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
127.0.0.1:9090   prometheus  127.0.0.1  9090/12020   linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
127.0.0.1:4000   tidb        127.0.0.1  4000/10080   linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
127.0.0.1:20160  tikv        127.0.0.1  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
Total nodes: 5
2. The configuration file is as follows:

jin@TABLET-PBTEB744:~$ cat template.yaml
# For more information about the format of the tiup cluster topology file, consult:
# Deploy a TiDB Cluster Using TiUP | PingCAP Docs

# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  # The OS user who runs the tidb cluster.
  user: "tidb"
  # SSH port of servers in the managed cluster.
  ssh_port: 22
  # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/tidb-deploy"
  # TiDB Cluster data storage directory
  data_dir: "/tidb-data"
  # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"

pd_servers:
  - host: 127.0.0.1

tidb_servers:
  - host: 127.0.0.1

tikv_servers:
  - host: 127.0.0.1

monitoring_servers:
  - host: 127.0.0.1

grafana_servers:
  - host: 127.0.0.1

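Incidentally, the address PD actually binds to is determined by the flags in the start script that tiup cluster generates under the PD deploy directory, so those are worth checking as well. A sketch, assuming the default paths from the topology above (script names can differ between tiup versions):

# Inspect the client-urls / advertise-client-urls flags PD was started with
grep -E 'client-urls' /tidb-deploy/pd-2379/scripts/run_pd.sh

# Confirm what the running pd-server process received on its command line
ps -ef | grep pd-server | grep -v grep
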
| username: CuteRay | Original post link

As the screenshot shows, I deployed a TiDB cluster following your steps. Opening http://127.0.0.1:2379/dashboard indeed does not give normal access to the cluster's Dashboard, but http://localhost:2379/dashboard does. This comes down to how WSL2 works: unlike WSL1, WSL2 is essentially an independent virtual machine with its own IP address, connected to Windows through a virtual router. Services deployed in WSL2 cannot be reached from Windows directly via 127.0.0.1, but they can be reached via localhost, which is presumably handled by some special internal routing mechanism.
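A quick way to confirm this from the Windows side is to probe both host names there. A sketch, run in PowerShell on the Windows host (curl.exe ships with recent Windows 10/11, and localhost forwarding can be toggled via the localhostForwarding setting in .wslconfig):

# Compare the two host names from Windows
curl.exe -I http://localhost:2379/dashboard/
curl.exe -I http://127.0.0.1:2379/dashboard/

# The WSL2 VM also has its own IP address that can be used directly (it changes across restarts)
wsl hostname -I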

As for why the cluster temporarily brought up by tiup playground can be accessed via 127.0.0.1, my guess is that tiup playground specifies --host 127.0.0.1 by default, though I'm not entirely sure.
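For reference, tiup playground does expose a --host flag whose documented default is 127.0.0.1, so the two cases can be compared by binding the playground explicitly. A sketch (exact flag wording may vary across tiup versions):

# Check the flag and its default in the installed tiup version
tiup playground --help | grep -i host

# Start a throwaway playground bound to all interfaces for comparison
tiup playground v6.5.0 --host 0.0.0.0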

| username: huanglao2002 | Original post link

I'm not sure whether this can be raised with the TiDB development team so they can find out where the root cause lies.

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.