How to Improve Performance Using TiDB 4.0?

Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: How to improve performance with TiDB 4.0?

| username: TiDBer_y9IRzLWc

[TiDB Usage Environment] Production Environment / Testing / PoC
[TiDB Version]
[Reproduction Path] What operations were performed when the issue occurred
[Encountered Issue: Issue Phenomenon and Impact]
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page
[Attachments: Screenshots / Logs / Monitoring]

There is currently a performance issue with the online TiDB service.

We are currently using TiDB version 4.0.9.

The TiKV configuration consists of 5 servers, each with 32 cores, 64GB of RAM, and NVMe SSDs.

Currently, PD and TiDB-server are configured on 3 servers, each with 48 cores and 128GB of RAM.

There are 5 TiKV nodes, each with 15TB of disk space, with around 3TB used and 11-12TB remaining.

Currently, 90 business servers write data into the TiDB cluster, and the cluster tops out at about 11 MB of data written per second. Given the current version and hardware configuration, could anyone suggest ways to improve concurrent insert performance and other optimizations?
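
For concreteness, here is a minimal client-side sketch, not from the original thread, of the usual first lever for insert throughput: multi-row batched INSERTs from several concurrent workers. It assumes the `pymysql` driver and a hypothetical table `t(id, payload)`; the host, credentials, batch size, and thread count are placeholders.

```python
# Sketch only: batched, multi-threaded inserts into TiDB over the MySQL protocol.
# Assumes the pymysql driver and a hypothetical table such as
#   CREATE TABLE t (id BIGINT AUTO_RANDOM PRIMARY KEY, payload VARCHAR(255));
# Host, credentials, batch size, and thread count are placeholders.
import threading

import pymysql

TIDB = dict(host="tidb-server", port=4000, user="app", password="***", database="test")
BATCH = 500     # rows per multi-row INSERT
THREADS = 8     # concurrent writer threads in this process

def writer(rows):
    conn = pymysql.connect(**TIDB, autocommit=False)
    try:
        with conn.cursor() as cur:
            for i in range(0, len(rows), BATCH):
                chunk = rows[i:i + BATCH]
                # One multi-row INSERT per batch instead of one statement per row.
                cur.executemany("INSERT INTO t (payload) VALUES (%s)", chunk)
                conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    data = [(f"row-{n}",) for n in range(100_000)]
    shard = len(data) // THREADS
    workers = [
        threading.Thread(target=writer, args=(data[i * shard:(i + 1) * shard],))
        for i in range(THREADS)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Row-by-row autocommitted INSERTs are a common reason clusters only sustain a few MB/s of writes; batching, together with a primary key that spreads writes (e.g. AUTO_RANDOM or SHARD_ROW_ID_BITS), often helps more than server-side tuning alone.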

| username: cy6301567 | Original post link

Check the official documentation: TiDB Software and Hardware Recommendations | PingCAP Docs

TiKV and TiFlash need comparatively more CPU and memory.

| username: 像风一样的男子 | Original post link

Only 2 TiKV nodes running with 2 replicas? That’s not safe. It’s better to run 3 nodes with 3 replicas.
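
To verify this on a running cluster, a small sketch like the one below (not from the original reply) reads the replica count and TiKV store count from PD's HTTP API; the same numbers are shown by pd-ctl's `config show replication` and `store` commands, and on the Dashboard's Cluster Info page. The PD address is a placeholder.

```python
# Sketch only: read the configured replica count and the TiKV store count
# from PD's HTTP API (default port 2379). "pd-server" is a placeholder.
import requests

PD = "http://pd-server:2379"

config = requests.get(f"{PD}/pd/api/v1/config", timeout=5).json()
stores = requests.get(f"{PD}/pd/api/v1/stores", timeout=5).json()

print("max-replicas:", config["replication"]["max-replicas"])
print("TiKV stores :", stores["count"])
# With the default max-replicas = 3 you want at least 3 TiKV stores, so that
# the loss of one node never removes a majority of any Region's replicas.
```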

| username: Billmay表妹 | Original post link

“Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of this page.” Please post that screenshot!

| username: Billmay表妹 | Original post link

Take a look at the documentation first: the official docs list recommended configurations, so adjust your deployment to match them.

| username: h5n1 | Original post link

  1. The first thing to do is add one more TiKV server to keep the data safe; otherwise, losing a single node would be disastrous.
  2. Upgrade to a newer version.
  3. Depending on available resources, deploy multiple TiKV instances per server and bind each instance to a NUMA node.
  4. Check whether any SQL needs optimizing, whether there are write hotspots, and how much CPU pressure TiKV and TiDB are under (see the sketch after this list).
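
For point 4, a sketch like the following (not part of the original reply) lists the current write hotspots from `INFORMATION_SCHEMA.TIDB_HOT_REGIONS`; host and credentials are placeholders, and the Dashboard's Key Visualizer shows the same information graphically.

```python
# Sketch only: list current write hotspots via information_schema.
# Assumes the pymysql driver; connect to any tidb-server instance.
import pymysql

conn = pymysql.connect(host="tidb-server", port=4000, user="app", password="***",
                       cursorclass=pymysql.cursors.DictCursor)
try:
    with conn.cursor() as cur:
        # TIDB_HOT_REGIONS reports hot Regions; TYPE is 'read' or 'write'.
        cur.execute(
            "SELECT * FROM information_schema.tidb_hot_regions "
            "WHERE type = 'write' ORDER BY flow_bytes DESC LIMIT 20"
        )
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```
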
| username: tidb菜鸟一只 | Original post link

There are only two TiKV nodes but three TiDB servers? And PD is co-deployed with TiKV?
Do your TiDB server machines have SSDs? I suggest scaling in one TiDB server and scaling it back out as a PD node, then scaling in the two PD instances that share machines with TiKV and scaling them out onto the TiDB servers…

| username: Billmay表妹 | Original post link

This feature looks perfect for you!

| username: system | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.