Note:
This topic has been translated from a Chinese forum by GPT and might contain errors.
Original topic: TiDB on K8S 使用哪种的存储更合适?
[TiDB Usage Environment] Production Environment / Testing / PoC
Production Environment
[TiDB Version]
[Encountered Problem: Problem Phenomenon and Impact]
What is the recommended mounting method for external storage for TiDB on K8S?
[Resource Configuration]
Each node has roughly 10 × 1 TiB SSDs
[TiDB Operator Version]:
[K8s Version]:
Local NVMe (doge, half-joking)
Isn’t Ceph Rook the standard configuration for K8S? But this is so complicated…
Should we use hostPath directly, use local-volume-provisioner to create a StorageClass, or something else?
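For reference on the second option: local-volume-provisioner turns disks mounted under a discovery directory into local PVs automatically, but you still define a StorageClass with no dynamic provisioner and WaitForFirstConsumer binding. A minimal sketch via the Python kubernetes client; the class name, reclaim policy, and use of load_kube_config are assumptions, not details from this thread:

```python
# Minimal sketch: the StorageClass that local-volume-provisioner (or statically
# provisioned local PVs) binds against. Names here are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

local_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="local-storage"),  # assumed class name
    provisioner="kubernetes.io/no-provisioner",          # local volumes have no dynamic provisioner
    volume_binding_mode="WaitForFirstConsumer",          # delay binding until the Pod is scheduled
    reclaim_policy="Retain",                             # keep TiKV data if the PVC is deleted
)

client.StorageV1Api().create_storage_class(local_sc)
```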
Is your prod environment using Ceph Rook? Is the Ceph Rook cluster co-located with TiDB in the same k8s cluster, or are TiDB and Ceph Rook in different k8s clusters?
I don’t know how to use Ceph Rook myself, so I used hostPath directly.
Ceph is set up, but I haven’t tested it yet.
How can disk resources be limited when using hostPath?
How can we ensure that the Pod can still be scheduled to the previous node after a restart when directly using hostPath? If it cannot be ensured, what is the cost of data synchronization, and has it been evaluated?
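For context on the pinning question: a statically provisioned local PV (which is also what local-volume-provisioner creates) carries a nodeAffinity, so once a TiKV Pod's PVC binds to it the scheduler always places that Pod back on the node that owns the disk, and the declared capacity gives a per-volume size limit; bare hostPath guarantees neither. A minimal hand-written equivalent via the Python kubernetes client; the PV name, node name, mount path, and size are assumptions for illustration:

```python
# Minimal sketch: a statically provisioned local PV. The nodeAffinity is what
# forces the claiming Pod back onto the node that holds the disk; `capacity`
# is what bounds the volume size. All names and paths are assumptions.
from kubernetes import client, config

config.load_kube_config()

pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="tikv-node1-ssd0"),         # assumed PV name
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "1Ti"},                               # one of the 1 TiB SSDs
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-storage",                        # class from the sketch above
        local=client.V1LocalVolumeSource(path="/mnt/disks/ssd0"),  # assumed mount point
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[client.V1NodeSelectorTerm(
                    match_expressions=[client.V1NodeSelectorRequirement(
                        key="kubernetes.io/hostname",
                        operator="In",
                        values=["node1"],                           # assumed node name
                    )]
                )]
            )
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(pv)
```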
No, I'm just playing around on my own; no evaluation has been done.
If you want to go into production, consider Ceph Rook.
Hmm, so this comes down to trade-offs.
TiDB on k8s:
hostPath: solves automatic scaling on the operations side, but not resource isolation.
local-volume-provisioner: solves resource isolation, but not automatic scaling (replicas have to be adjusted manually).
Ceph: solves both automatic scaling and resource isolation, but operational costs are high and storage-component issues are hard to troubleshoot.
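Whichever option wins the trade-off, TiDB Operator consumes it the same way: the TidbCluster spec just names a StorageClass and a storage size per component. A minimal sketch via the Python kubernetes client; the cluster name, namespace, version, replica counts, and sizes are assumptions:

```python
# Minimal sketch: point a TidbCluster at the chosen StorageClass. Whether that
# class is backed by local-volume-provisioner or Ceph (Rook) only changes
# storageClassName; all names, versions and sizes below are assumptions.
from kubernetes import client, config

config.load_kube_config()

tidb_cluster = {
    "apiVersion": "pingcap.com/v1alpha1",
    "kind": "TidbCluster",
    "metadata": {"name": "basic", "namespace": "tidb-cluster"},   # assumed names
    "spec": {
        "version": "v7.5.0",                                      # assumed TiDB version
        "pvReclaimPolicy": "Retain",
        "pd":   {"baseImage": "pingcap/pd",   "replicas": 3,
                 "storageClassName": "local-storage", "requests": {"storage": "10Gi"}},
        "tikv": {"baseImage": "pingcap/tikv", "replicas": 3,
                 "storageClassName": "local-storage", "requests": {"storage": "1Ti"}},
        "tidb": {"baseImage": "pingcap/tidb", "replicas": 2,
                 "service": {"type": "ClusterIP"}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="pingcap.com", version="v1alpha1",
    namespace="tidb-cluster", plural="tidbclusters",
    body=tidb_cluster,
)
```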
The performance of local disks should be better.
You can only work this out yourself, plan it according to your own practice, and test it.
There is no once-and-for-all solution.