NUMA Issues in Mixed Deployment of TiKV Nodes

This topic has been translated from a Chinese forum by GPT and might contain errors.

Original topic: tikv节点混合部署numa问题

| username: 啦啦啦啦啦

[TiDB Usage Environment] Production Environment
[Resource Configuration] 72 cores, 512 GB RAM, 4 × 1.6 TB drives
[Encountered Problem: Problem Phenomenon and Impact]
We are preparing to purchase hardware and want to avoid wasting resources as much as possible. We plan to start with 5 machines for TiKV, deploying 4 TiKV instances on each machine. As a novice, I don't understand NUMA very well. Could the experts advise whether NUMA binding is necessary?

| username: tidb菜鸟一只 | Original post link

If you deploy 4 TiKV instances on one machine, you definitely need NUMA binding; otherwise, cross-node memory access and resource contention will keep you from reaching ideal performance. First, run `numactl --hardware` to check how many NUMA nodes the machine has. Output like `available: 2 nodes (0-1)` indicates 2 nodes. If the machine has 4 NUMA nodes, bind each TiKV instance to a separate node.
Additionally, it is recommended to mount the 4 storage devices as 4 separate directories and assign each directory to one of the 4 TiKV instances.
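A minimal sketch of what this might look like in a TiUP topology file, assuming a 4-NUMA-node machine (the host IP, ports, and mount paths below are placeholders, not from the original post):

```yaml
# Fragment of a TiUP topology.yaml: 4 TiKV instances on one host,
# each pinned to its own NUMA node and its own storage device.
tikv_servers:
  - host: 10.0.1.1
    port: 20160
    status_port: 20180
    data_dir: /data1/tikv-20160
    numa_node: "0"        # bind this instance to NUMA node 0
  - host: 10.0.1.1
    port: 20161
    status_port: 20181
    data_dir: /data2/tikv-20161
    numa_node: "1"
  - host: 10.0.1.1
    port: 20162
    status_port: 20182
    data_dir: /data3/tikv-20162
    numa_node: "2"
  - host: 10.0.1.1
    port: 20163
    status_port: 20183
    data_dir: /data4/tikv-20163
    numa_node: "3"
```

Note that `numactl` must be installed on the target machine for the `numa_node` setting to take effect. You can verify the binding after startup with `numactl --show` or by checking the process with `taskset -pc <pid>`.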

| username: BraveChen | Original post link

Personally, I don't think 5 machines each running 4 TiKV instances is a good setup. Generally, 1 to 2 TiKV instances per machine is more appropriate.

| username: 啦啦啦啦啦 | Original post link

Yes, doing it this way will definitely have some impact on performance and reliability, but running only one TiKV per machine would seriously waste resources. If we could use the cloud, resource allocation could be more flexible, but our current conditions don't allow it.

| username: BraveChen | Original post link

I suggest deploying just two TiKV instances per machine.

| username: 啦啦啦啦啦 | Original post link

Okay, I'll think it over.

| username: 啦啦啦啦啦 | Original post link

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.