Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: Transparent huge pages and disk policy for virtual machines on mechanical-disk servers? (机械盘服务器上虚拟机的透明大页和硬盘策略?)
[TiDB Usage Environment] Production Environment
[TiDB Version] 5.1
[Reproduction Path] Following the documentation (TiDB Environment and System Configuration Check | PingCAP Documentation Center), I tuned transparent huge pages and the disk I/O scheduler policy.
[Encountered Problem] Write performance decreased. A Spark JDBC write to the same table, whose data volume fluctuates by no more than 1%, went from 6 hours to 10 hours. No data jitter observed; other factors have been ruled out.
[Resource Configuration]
[Attachment: Screenshot/Log/Monitoring]
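For context, the job is a plain Spark JDBC write, roughly of the shape sketched below. Everything in it is illustrative: the host, database, table, credentials, and option values are placeholders, not the actual job's settings.

```python
# Sketch of a Spark JDBC write to TiDB (TiDB speaks the MySQL protocol,
# by default on port 4000). All names and values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tidb-jdbc-write").getOrCreate()

df = spark.read.parquet("/data/staging/daily")  # hypothetical source data

(df.write
   .format("jdbc")
   # rewriteBatchedStatements lets the MySQL driver send batched INSERTs
   .option("url", "jdbc:mysql://tidb-host:4000/mydb?rewriteBatchedStatements=true")
   .option("driver", "com.mysql.jdbc.Driver")  # com.mysql.cj.jdbc.Driver on Connector/J 8.x
   .option("dbtable", "target_table")
   .option("user", "writer")
   .option("password", "***")
   .option("batchsize", 10000)  # rows per JDBC batch
   .mode("append")
   .save())
```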
Did you disable transparent huge pages and set the I/O scheduler of the storage medium to noop? Setting the scheduler to noop doesn't make much sense for mechanical drives, right? noop is meant for devices that reorder requests themselves (SSDs, hardware RAID, virtualized storage); spinning disks usually do better with deadline or cfq.
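To double-check, both settings can be read back from sysfs. A minimal sketch, assuming the standard Linux sysfs paths (the active value is the one shown in brackets; device names will differ per host):

```python
# Read the current THP mode and the I/O scheduler of each sd* block device.
# Standard Linux sysfs paths; run this on the TiKV hosts themselves.
from pathlib import Path

def first_line(p: Path) -> str:
    return p.read_text().strip()

# e.g. "always madvise [never]" -- [never] means THP is disabled
print("transparent_hugepage:",
      first_line(Path("/sys/kernel/mm/transparent_hugepage/enabled")))

# e.g. "noop [deadline] cfq" per device
for sched in Path("/sys/block").glob("sd*/queue/scheduler"):
    print(f"{sched.parent.parent.name}:", first_line(sched))
```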
I have tried deadline, noop, and cfq and found no difference between them. Either way, write performance has not recovered: whichever scheduler is used, the write time stays at about 10 hours instead of the original 6.
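For reference, this kind of A/B test is just a sysfs write; a sketch, with sdb as a placeholder device name (needs root and does not persist across reboots; on older kernels the elevator= boot parameter makes it permanent):

```python
# Switch the I/O scheduler of one device at runtime for testing.
# "sdb" is a placeholder; requires root; reverts on reboot.
from pathlib import Path

Path("/sys/block/sdb/queue/scheduler").write_text("deadline")
# The active scheduler is shown in [brackets]
print(Path("/sys/block/sdb/queue/scheduler").read_text().strip())
```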
Hello,
It is recommended to use solid-state drives (SSDs) as data disks for TiDB; mechanical disks come under significant I/O pressure.
We originally ran on a certain cloud's SSDs and migrated to on-premises servers. Margins have shrunk and the cloud costs were no longer feasible, so we're making do with second-hand hardware.
Sigh, cost. But can you really ignore whether the business can keep running?