Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: TiDB Dashboard reports an error when querying TopSQL: "when searching tsids: the number of matching unique timeseries exceeds 300000"
TiDB Dashboard reports an error when querying TopSQL, with the following error message:
API: /topsql/summary
{"status":"error","errorType":"422","error":"error when executing query=\"sum_over_time(sql_exec_count{instance=\\\"172.18.243.39:20160\\\", instance_type=\\\"tikv\\\"}[52s])\" for (time=1705993080000, step=300000): cannot evaluate \"sum_over_time(sql_exec_count{instance=\\\"172.18.243.39:20160\\\", instance_type=\\\"tikv\\\"}[52s])\": search error after reading 0 data blocks: error when searching for tagFilters=[{__name__=\"sql_exec_count\", instance=\"172.18.243.39:20160\", instance_type=\"tikv\"}] on the time range [2024-01-23 06:48:00 +0000 UTC - 2024-01-23 06:58:00 +0000 UTC]: error when searching tsids: the number of matching unique timeseries exceeds 300000; either narrow down the search or increase -search.maxUniqueTimeseries"}
After checking the data volume with df, I found 17G of data in total, which is indeed much larger than in other clusters. Could the data volume be what causes this issue?
1.6G docdb
17G tsdb
641M wal
After manually clearing the tsdb directory, queries did work again, but the error reappeared the next day, so I suspect the data volume is the cause.
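The disk check above can be scripted. NGM_DATA below is an assumed path (in this thread the docdb/tsdb/wal directories sit under the Prometheus deploy directory); verify the actual location on your own cluster:

```shell
#!/bin/bash
# Sketch: inspect ng-monitoring's on-disk usage per subdirectory.
# NGM_DATA is an assumed path; point it at your cluster's ng-monitoring
# data directory. Errors for missing dirs are suppressed on purpose.
NGM_DATA=${NGM_DATA:-/data/tidb-deploy/prometheus-9090/data/ngm}
du -sh "$NGM_DATA"/docdb "$NGM_DATA"/tsdb "$NGM_DATA"/wal 2>/dev/null || true
```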
The error message says: error when searching tsids: the number of matching unique timeseries exceeds 300000; either narrow down the search or increase -search.maxUniqueTimeseries
Is there a parameter that can be used to increase -search.maxUniqueTimeseries?
The number of time series exceeded the limit.
After upgrading to v7.5, two new configuration items are available, but for now the configuration file must be edited manually; the items cannot be set through TiUP.
The modification procedure is as follows:
- Copy the ngmonitoring configuration to a new file. (The original configuration file is overwritten every time Prometheus is reloaded.)
cd /data/tidb-deploy/prometheus-9090
cp conf/ngmonitoring.toml conf/ngmonitoring-new.toml
- Add two tsdb configuration items to conf/ngmonitoring-new.toml. Here, I shortened the tsdb retention period and increased the value of search-max-unique-timeseries.
[tsdb]
# Data with timestamps outside the retentionPeriod is automatically deleted
# The following optional suffixes are supported: h (hour), d (day), w (week), y (year).
# If suffix isn't set, then the duration is counted in months.
retention-period = "7d"
# `search-max-unique-timeseries` limits the number of unique time series a single query can find and process.
# VictoriaMetrics(tsdb) keeps in memory some metainformation about the time series located by each query
# and spends some CPU time for processing the found time series. This means that the maximum memory usage
# and CPU usage a single query can use is proportional to `search-max-unique-timeseries`.
search-max-unique-timeseries = 9000000
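The two items above can also be appended with a short script. This is only a sketch; the config path is the example from this thread (run it from the prometheus deploy directory, or adjust CONF):

```shell
#!/bin/bash
# Sketch: append the [tsdb] section to the copied config file.
# CONF is the example path from this thread; adjust for your deployment.
CONF=${CONF:-conf/ngmonitoring-new.toml}
mkdir -p "$(dirname "$CONF")"
cat >> "$CONF" <<'EOF'
[tsdb]
# Data older than retention-period is deleted automatically.
retention-period = "7d"
# Cap on unique time series a single query may match.
search-max-unique-timeseries = 9000000
EOF
```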
- Modify scripts/ng-wrapper.sh to use the new configuration file. (Files in the scripts directory are only overwritten during tiup cluster upgrade; regular scale-out/scale-in/reload operations do not overwrite them.)
#!/bin/bash
# WARNING: This file was auto-generated to restart ng-monitoring when it fails.
# Do not edit! All your edits might be overwritten!
while true
do
    bin/ng-monitoring-server --config /home/tidb/tidb-deploy/prometheus-9090/conf/ngmonitoring-new.toml >/dev/null 2>&1
    sleep 15s
done
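A quick check like the following can confirm the wrapper now points at the new file. WRAPPER is the example path from this thread; adjust it for your deploy directory. (After editing the wrapper, killing the running ng-monitoring-server process lets the wrapper's while-loop respawn it with the new config.)

```shell
#!/bin/bash
# Sketch: verify ng-wrapper.sh references the new config file.
# WRAPPER is an example path from this thread; adjust to your deploy dir.
WRAPPER=${WRAPPER:-scripts/ng-wrapper.sh}
if [ -f "$WRAPPER" ]; then
  grep -n 'ngmonitoring-new.toml' "$WRAPPER" \
    && echo "wrapper uses the new config"
else
  echo "wrapper not found at $WRAPPER (adjust the path)"
fi
# To apply: e.g. pkill -f ng-monitoring-server, then wait ~15s for the
# wrapper loop to respawn the server with the new configuration.
```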