Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: tikv 个数增加了一倍,但写入性能不能提高 (the number of TiKV nodes was doubled, but write performance did not improve)
[TiDB Usage Environment] Test / PoC
[TiDB Version]
7.1.0
[Reproduction Path]
Initially there were three servers with one TiKV node each, three TiKV nodes in total. With a client concurrency of 30, the write QPS was approximately 1,500. Then an additional disk of the same type was added to each of the three machines and a second TiKV node was deployed on each, doubling the TiKV count to six. The client was then tested again with 30 and 60 concurrent writes, expecting performance to roughly double.
[Encountered Issue: Problem Phenomenon and Impact]
Performance was expected to roughly double, but the write QPS stayed the same as before.
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
Are you running a stress test? What tool are you using, and what are the steps? Is the primary key of the table a random primary key?
Whether scaling out helps depends on where the original bottleneck was. If it is not a disk I/O issue, adding TiKV nodes is pointless.
It counts as a stress test. I wrote a tool in Go; the primary key is a random UUID. It connects to a single TiDB server, and that machine has 6 cores and 16 GB of RAM. The tidb-server CPU is at 300%, the machine load is 19, and I/O utilization does not exceed 90%.
Originally, iostat showed %util at around 95, but that metric is not very accurate for NVMe drives anyway.
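For reference, a minimal sketch of a load generator like the one described, in Go. The table `t`, its schema, and the DSN are illustrative assumptions, not the poster's actual code:

```go
package main

import (
	"crypto/rand"
	"database/sql"
	"fmt"
	"log"
	"sync"

	_ "github.com/go-sql-driver/mysql" // TiDB speaks the MySQL protocol
)

// randomUUID builds a version-4 UUID string from crypto/rand so the
// sketch needs no external UUID dependency.
func randomUUID() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		log.Fatal(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	// Placeholder DSN; point it at the tidb-server under test.
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const concurrency = 30 // the concurrency used in the test above
	db.SetMaxOpenConns(concurrency)

	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 10000; j++ {
				// Random UUID primary key, as in the original tool.
				if _, err := db.Exec(
					"INSERT INTO t (id, v) VALUES (?, ?)", randomUUID(), j,
				); err != nil {
					log.Println(err)
				}
			}
		}()
	}
	wg.Wait()
}
```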
I recommend taking a look at the documentation on how to optimize writes. Simply adding TiKV nodes is not enough; writes to a single table need some additional tuning.
Looking at the monitoring now, performance has actually dropped quite a bit, which is strange.
Has the table structure been changed?
If you create the table the MySQL way (an auto-increment primary key), you will definitely run into write hotspots.
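To make that concrete, here is a sketch (table names are hypothetical) contrasting a MySQL-style auto-increment key with TiDB's `AUTO_RANDOM` and `SHARD_ROW_ID_BITS` alternatives, issued through Go as above:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, stmt := range []string{
		// MySQL-style: AUTO_INCREMENT keys are monotonically increasing,
		// so every insert lands on the table's last Region.
		`CREATE TABLE t_hotspot (id BIGINT AUTO_INCREMENT PRIMARY KEY, v BIGINT)`,

		// TiDB option 1: AUTO_RANDOM scatters the key space so inserts
		// spread across Regions (and thus across TiKV nodes).
		`CREATE TABLE t_random (id BIGINT AUTO_RANDOM PRIMARY KEY, v BIGINT)`,

		// TiDB option 2: with a nonclustered primary key, shard the hidden
		// row ID and pre-split Regions so load is spread from the start.
		`CREATE TABLE t_sharded (id CHAR(36), v BIGINT,
			PRIMARY KEY (id) NONCLUSTERED)
			SHARD_ROW_ID_BITS = 4 PRE_SPLIT_REGIONS = 4`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Println(err)
		}
	}
}
```

This is why adding TiKV nodes does nothing for an append-only hotspot: only one Region's leader is being written, no matter how many nodes the cluster has.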
In my experience, if there are no write hotspots, going from the minimum of 3 TiKV nodes (one per replica) to 4 improves performance by about 1/3: the 3 replicas' write load is spread over 4 nodes instead of 3, so each node carries 3/4 of its previous load.
When DM imports data, this is easy to observe in the import traffic.
Try using random primary keys, and check the monitoring to see whether there are any hotspots.
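Besides Grafana, a quick check is to query `INFORMATION_SCHEMA.TIDB_HOT_REGIONS` for write-hot Regions; a rough sketch (the DSN is a placeholder):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/information_schema")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// TIDB_HOT_REGIONS lists the Regions PD currently considers hot.
	rows, err := db.Query(`SELECT db_name, table_name, region_id, flow_bytes
		FROM TIDB_HOT_REGIONS WHERE type = 'write'`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var dbName, tblName string
		var regionID, flowBytes uint64
		if err := rows.Scan(&dbName, &tblName, &regionID, &flowBytes); err != nil {
			log.Fatal(err)
		}
		// If a handful of Regions from one table dominate flow_bytes,
		// writes are concentrated there -- a hotspot.
		fmt.Printf("%s.%s region=%d flow_bytes=%d\n", dbName, tblName, regionID, flowBytes)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```

`pd-ctl hot write` reports the same information from PD's side.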