Consultation on TiDB Performance Issues

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: TIDB性能问题请教

| username: 新手村村民

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version] 6.5
[Encountered Issue: Problem Description and Impact]
The company plans to migrate from MySQL to TiDB and is currently in the research phase. The deployment method is based on the official documentation, with the following configuration:
[image: cluster deployment configuration]


The virtual machine hardware configuration is 2 CPUs, 16GB memory, and 500GB hard disk.

Currently, the database (size 25GB) has been migrated to TiDB via DM.
However, even simple SELECT queries perform worse than in MySQL; not a single query has turned out faster than MySQL.


Please advise, thank you!

| username: 胡杨树旁 | Original post link

Please post the SQL and execution plan.
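
In TiDB you can get the runtime plan (actual row counts and per-operator timing) with EXPLAIN ANALYZE, for example:

-- Prefix the statement with EXPLAIN ANALYZE; unlike plain EXPLAIN, it actually runs
-- the query and reports actRows, execution time, and memory per operator.
EXPLAIN ANALYZE SELECT ...;  -- put the full query here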

| username: 新手村村民 | Original post link

SELECT
  ddi.id
FROM
  dw_declare_info ddi
WHERE
  ddi.del_flag = '0'
  AND NOW() < ddi.end_time
  AND ddi.audit_status != 0
  AND ddi.audit_status != 3
  AND ddi.audit_time BETWEEN '2023-05-02 00:00:00.0'
  AND '2023-05-03 00:00:00.0'
  AND ddi.identity NOT IN (
    SELECT
      black_value
    FROM
      black_info_new
    WHERE
      exist_or_not = 0
  )
ORDER BY
  ddi.update_time DESC
LIMIT
  0, 10;

||id                         |task     |estRows|operator info                                                                                                                                                                                                                                                                                                                                                             |actRows|execution info                                                                                                                                                                                                                                                                                          |memory   |disk|
|---|---|---|---|---|---|---|---|---|
||Projection_12              |root     |10     |shihua_center.dw_declare_info.id                                                                                                                                                                                                                                                                                                                                          |10     |time:1.73s, loops:2, Concurrency:OFF                                                                                                                                                                                                                                                                    |760 Bytes|N/A|
||└─TopN_15                  |root     |10     |shihua_center.dw_declare_info.update_time:desc, offset:0, count:10                                                                                                                                                                                                                                                                                                        |10     |time:1.73s, loops:2                                                                                                                                                                                                                                                                                     |15.4 KB  |N/A|
||  └─HashJoin_20            |root     |53.38  |CARTESIAN anti semi join, other cond:eq(shihua_center.dw_declare_info.identity, shihua_center.black_info_new.black_value)                                                                                                                                                                                                                                                 |561    |time:1.73s, loops:2, build_hash_table:{total:70.3ms, fetch:69.4ms, build:875µs}, probe:{concurrency:5, total:4.68s, max:1.73s, probe:996.4ms, fetch:3.68s}                                                                                                                                             |505.7 KB |0 Bytes|
||    ├─TableReader_30(Build)|root     |8332   |data:Selection_29                                                                                                                                                                                                                                                                                                                                                         |8332   |time:69.5ms, loops:10, cop_task: {num: 6, max: 18ms, min: 1.81ms, avg: 11.6ms, p95: 18ms, max_proc_keys: 480, p95_proc_keys: 480, rpc_num: 6, rpc_time: 69.7ms, copr_cache_hit_ratio: 0.83, distsql_concurrency: 15}                                                                                    |120.8 KB |N/A|
||    │ └─Selection_29       |cop[tikv]|8332   |eq(shihua_center.black_info_new.exist_or_not, 0)                                                                                                                                                                                                                                                                                                                          |8332   |tikv_task:{proc max:84ms, min:0s, avg: 26.2ms, p80:38ms, p95:84ms, iters:34, tasks:6}, scan_detail: {total_process_keys: 480, total_process_keys_size: 84068, total_keys: 481, get_snapshot_time: 78.7µs, rocksdb: {key_skipped_count: 480, block: {cache_hit_count: 11}}}                             |N/A      |N/A|
||    └─TableReader_23(Probe)|root     |66.72  |data:Selection_22                                                                                                                                                                                                                                                                                                                                                         |561    |time:736.5ms, loops:2, cop_task: {num: 14, max: 735.8ms, min: 5.76ms, avg: 394.4ms, p95: 735.8ms, max_proc_keys: 303626, p95_proc_keys: 303626, tot_proc: 5.41s, tot_wait: 1ms, rpc_num: 14, rpc_time: 5.52s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}                                      |39.1 KB  |N/A|
||      └─Selection_22       |cop[tikv]|66.72  |eq(shihua_center.dw_declare_info.del_flag, 0), ge(shihua_center.dw_declare_info.audit_time, 2023-05-02 00:00:00.000000), le(shihua_center.dw_declare_info.audit_time, 2023-05-03 00:00:00.000000), lt(2023-05-24 10:33:26, shihua_center.dw_declare_info.end_time), ne(shihua_center.dw_declare_info.audit_status, 0), ne(shihua_center.dw_declare_info.audit_status, 3)|561    |tikv_task:{proc max:734ms, min:4ms, avg: 386.5ms, p80:657ms, p95:734ms, iters:2554, tasks:14}, scan_detail: {total_process_keys: 2552222, total_process_keys_size: 1017786960, total_keys: 2552236, get_snapshot_time: 251.5µs, rocksdb: {key_skipped_count: 4625694, block: {cache_hit_count: 17053}}}|N/A      |N/A|
||        └─TableFullScan_21 |cop[tikv]|2550943|table:ddi, keep order:false                                                                                                                                                                                                                                                                                                                                               |2552222|tikv_task:{proc max:661ms, min:4ms, avg: 340.4ms, p80:561ms, p95:661ms, iters:2554, tasks:14}                                                                                                                                                                                                           |N/A      |N/A|

| username: weixiaobing | Original post link

The whole table is being scanned; add an index on the time field. Also, your configuration is too low; deploy according to the recommended configuration on the official website.
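
For example, something along these lines, assuming audit_time is the time column meant here (the actual table definition isn't posted in the thread):

-- Index on the range-filtered time column, so the optimizer can replace the
-- TableFullScan with an index range scan; the index name is illustrative.
ALTER TABLE dw_declare_info ADD INDEX idx_audit_time (audit_time);

-- Then re-check the plan (predicates trimmed for brevity) to confirm the scan type changed:
EXPLAIN ANALYZE
SELECT ddi.id
FROM dw_declare_info ddi
WHERE ddi.del_flag = '0'
  AND ddi.audit_time BETWEEN '2023-05-02 00:00:00.0' AND '2023-05-03 00:00:00.0'
ORDER BY ddi.update_time DESC
LIMIT 0, 10;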

| username: 新手村村民 | Original post link

It’s very strange that the query efficiency is much lower than MySQL. The database is directly synchronized from MySQL, and the table design and indexes are the same. However, this query takes only 0.1 seconds to execute in MySQL, but more than 1.7 seconds in TiDB. Does it have anything to do with the server configuration? It is deployed on seven virtual machines allocated from one server.

| username: tidb菜鸟一只 | Original post link

The goal of TiDB’s design is to address scenarios where MySQL’s single-node capacity limits force sharding, or where strong consistency and fully distributed transactions are required. Its advantage lies in pushing as much computation as possible down to the storage nodes for parallel processing. For small tables (e.g., fewer than tens of millions of rows), TiDB is not a good fit, because the small data volume spans only a few Regions and parallelism cannot be exploited effectively. The most extreme example is a counter table with a handful of rows that are updated frequently. In TiDB, those few rows become a few KV pairs in the storage engine, which fall into a single Region, and that Region resides on a single node. Add the background overhead of strong-consistency replication and the overhead of the hop from the TiDB layer to the TiKV engine, and the performance ends up worse than a single MySQL instance.

Additionally, if TiDB is deployed in a mixed environment without resource isolation, its performance will be even worse, far less efficient than deploying a single-node MySQL on the same physical machine.
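
As a side note on the Region point above, you can see how a table's data is split across Regions directly in TiDB; the table name from this thread is used purely as an example:

-- Lists the Regions backing the table and the store (TiKV node) each one lives on.
-- A small table usually shows only one or a few Regions, i.e. little parallelism to exploit.
SHOW TABLE dw_declare_info REGIONS;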

| username: 新手村村民 | Original post link

I have a new discovery: for this query, if the LIMIT 0,10 at the end is not added, the query time is around 1.7 seconds for both MySQL and TiDB. If LIMIT 0,10 is added, MySQL needs only 0.1 seconds, but TiDB still takes 1.7 seconds. It’s frustrating, I don’t know why :weary:, help me, experts!

| username: 胡杨树旁 | Original post link

MySQL is a single-node system, so you only need to fetch data from one machine. TiDB, on the other hand, is distributed, meaning each TiKV node needs to fetch data, which is then aggregated on the TiDB server. I’m not sure if my understanding is correct.

| username: 新手村村民 | Original post link

However, each TiKV node operates in parallel.

| username: 新手村村民 | Original post link

However, this difference is a bit too large. In theory, even with a small amount of data, the performance through TiDB should not be much different from MySQL.

| username: tidb菜鸟一只 | Original post link

For MySQL, a query with LIMIT 0,10 simply retrieves the first 10 rows. For TiDB, however, which rows live on which TiKV node is not predictable, so each node has to return its qualifying data to TiDB, which then aggregates it, sorts by the ORDER BY column, and finally takes the top 10 rows.
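
One way to check this explanation is to compare the plans on both sides for the same statement; on the MySQL side, a plan that reads an index on update_time in reverse order with a small row estimate would support it (whether such an index exists isn't shown in this thread):

-- Run the query from above, prefixed with EXPLAIN, on MySQL and on TiDB,
-- and compare whether the sort/limit is satisfied from an index or only after scanning.
EXPLAIN
SELECT ddi.id
FROM dw_declare_info ddi
WHERE ddi.del_flag = '0'
  AND NOW() < ddi.end_time
  AND ddi.audit_status != 0
  AND ddi.audit_status != 3
  AND ddi.audit_time BETWEEN '2023-05-02 00:00:00.0' AND '2023-05-03 00:00:00.0'
  AND ddi.identity NOT IN (SELECT black_value FROM black_info_new WHERE exist_or_not = 0)
ORDER BY ddi.update_time DESC
LIMIT 0, 10;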

| username: 新手村村民 | Original post link

For TiDB, whether the LIMIT is added or not, the time is 1.7s. Conversely, for MySQL, without it the time is also around 1.7s, but with it the time becomes 0.1s.
For TiDB, querying all the data in the table and querying only 10 rows take the same amount of time.

| username: zhanggame1 | Original post link

It’s normal for it to be slow; your memory configuration is too small. TiDB consumes much more memory than MySQL. Additionally, a single SQL statement in a cluster is generally slower than in a standalone database.

| username: tidb菜鸟一只 | Original post link

That is expected. For this kind of sorted LIMIT 0,10, TiDB’s execution time is indeed not much different from running the query without the LIMIT; only the amount of data returned to your client differs. MySQL optimizes the sort directly in memory, so it is a bit faster. However, as the table grows, say to the ten-billion-row level, a single MySQL instance will definitely not keep up with TiDB’s multiple nodes.

| username: 人如其名 | Original post link

In an anti semi join, if the join column’s definition allows NULL, the join is executed internally as a Cartesian product (the CARTESIAN anti semi join in your plan), which is much less efficient. It is recommended to define the join column as NOT NULL and then check again.
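
A minimal sketch of two possible fixes, assuming identity / black_value are the join columns and that they contain no NULLs today (the column types below are guesses; check the real schema first):

-- Option 1: make the join columns NOT NULL so the optimizer can use a normal
-- (non-Cartesian) anti semi join. Types are placeholders.
ALTER TABLE dw_declare_info MODIFY identity VARCHAR(64) NOT NULL;
ALTER TABLE black_info_new MODIFY black_value VARCHAR(64) NOT NULL;

-- Option 2: rewrite NOT IN as NOT EXISTS, which sidesteps the NULL semantics of NOT IN
-- and typically lets the optimizer use a keyed anti join instead of a Cartesian one:
SELECT ddi.id
FROM dw_declare_info ddi
WHERE ddi.del_flag = '0'
  AND NOW() < ddi.end_time
  AND ddi.audit_status != 0
  AND ddi.audit_status != 3
  AND ddi.audit_time BETWEEN '2023-05-02 00:00:00.0' AND '2023-05-03 00:00:00.0'
  AND NOT EXISTS (
    SELECT 1
    FROM black_info_new b
    WHERE b.exist_or_not = 0
      AND b.black_value = ddi.identity
  )
ORDER BY ddi.update_time DESC
LIMIT 0, 10;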