Configuring dual 10 gigabit network cards

Application environment:

PoC

TiDB version: TiDB-v7.4.0

Reproduction method:

Problem:

The Hardware Requirements page says “10 Gigabit network card (2 preferred)”, but I don’t see any documentation on how to configure the two network cards to be sure they’re both working and being utilized.

Only one IP is specified for each “- host:” declaration in the .yml file. How is the 2nd IP configured for that node?

Resource allocation:

Nodes have 12 vCPU, 128 GB RAM, 4 TB SSD

Attachment:

To configure and utilize dual network cards effectively in a TiDB cluster, follow these steps and best practices:

1. Network Bonding/Link Aggregation

Network bonding, or link aggregation, combines multiple network interfaces into a single logical interface, providing redundancy and, depending on the bonding mode, higher aggregate throughput. The most common method is Linux’s bonding driver; a quick verification check is sketched right after the configuration steps below.

Steps to Configure Network Bonding:

  1. Install the necessary packages:

    sudo apt-get install ifenslave
    
  2. Load the bonding module:

    sudo modprobe bonding
    
  3. Configure the network interfaces:
    Edit the network configuration files (e.g., /etc/network/interfaces on Debian-based systems, or /etc/sysconfig/network-scripts/ifcfg-bond0 plus one ifcfg file per slave interface on Red Hat-based systems). On newer Ubuntu releases that use netplan, the equivalent bond is declared in a YAML file under /etc/netplan/ instead.

    Example for Debian-based systems:

    auto bond0
    iface bond0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1
        bond-slaves eth0 eth1
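        # 802.3ad (LACP) requires the switch ports to be configured for link aggregation;
        # for a switch-independent failover setup, bond-mode active-backup can be used instead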
        bond-mode 802.3ad
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200
    
    auto eth0
    iface eth0 inet manual
        bond-master bond0
    
    auto eth1
    iface eth1 inet manual
        bond-master bond0
    

    Example for Red Hat-based systems (each block below goes into its own file: ifcfg-bond0, ifcfg-eth0, and ifcfg-eth1):

    DEVICE=bond0
    IPADDR=192.168.1.100
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100"
    
    DEVICE=eth0
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    
    DEVICE=eth1
    USERCTL=no
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    
  4. Restart the network service:

    sudo systemctl restart networking  # For Debian-based systems
    sudo systemctl restart network     # For Red Hat-based systems
    

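After restarting, it is worth confirming that both physical interfaces are enslaved to the bond and that the link (and, for 802.3ad, the LACP negotiation) is healthy. A minimal check, assuming the interface names bond0, eth0, and eth1 from the examples above:

    # Show bonding mode, LACP/MII status, and the enslaved interfaces
    cat /proc/net/bonding/bond0

    # Confirm the bond carries the expected IP address
    ip addr show bond0

    # Check link state and negotiated speed of each physical NIC
    ethtool eth0 | grep -E 'Speed|Link detected'
    ethtool eth1 | grep -E 'Speed|Link detected'
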
2. Configuring the Second IP in .yml File

When using network bonding, you typically only need to specify one IP address in your configuration files because the bonded interface will handle traffic across both network cards. However, if you need to configure a second IP for specific purposes, you can add an alias to the bonded interface.

Example:

auto bond0:1
iface bond0:1 inet static
    address 192.168.1.101
    netmask 255.255.255.0
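
For the TiUP topology file mentioned in the question, no second IP is needed: each node is declared once, using the address of its bonded interface, and the kernel spreads traffic across both physical cards underneath. A minimal hypothetical excerpt, assuming the bond address 192.168.1.100 from the example above and the standard TiUP topology fields:

    # topology.yaml (excerpt) - every "- host:" entry uses the node's bond0 address
    pd_servers:
      - host: 192.168.1.100
    tikv_servers:
      - host: 192.168.1.100   # in a real cluster, each node's own bonded IP goes here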

3. Best Practices

  • Ensure Consistent Configuration: Make sure all nodes in the cluster have consistent network configurations to avoid any network-related issues.
  • Monitor Network Performance: Use tools like iftop, nload, or network monitoring solutions to confirm that both network interfaces are actually carrying traffic (a quick sketch follows this list).
  • Redundancy and Failover: Network bonding provides redundancy. If one network card fails, the other can take over, ensuring continuous network availability.
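
As a quick way to confirm that traffic is actually flowing over both slaves rather than just one, the per-interface counters can be sampled directly; a minimal sketch, assuming the interface names eth0 and eth1 from the bonding example:

    # Sample received/transmitted bytes for each slave over a 5-second window
    for dev in eth0 eth1; do
        rx1=$(cat /sys/class/net/$dev/statistics/rx_bytes)
        tx1=$(cat /sys/class/net/$dev/statistics/tx_bytes)
        sleep 5
        rx2=$(cat /sys/class/net/$dev/statistics/rx_bytes)
        tx2=$(cat /sys/class/net/$dev/statistics/tx_bytes)
        echo "$dev: rx $(( (rx2 - rx1) / 5 )) B/s, tx $(( (tx2 - tx1) / 5 )) B/s"
    done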

4. Documentation and Further Reading

For more detailed instructions and best practices, refer to the official documentation of your Linux distribution and the TiDB documentation on network configuration:

  • Software and Hardware Recommendations | PingCAP Docs
  • Tune Operating System Performance | PingCAP Docs

By following these steps, you can ensure that both network cards are configured correctly and utilized effectively in your TiDB cluster.