Sunday 27 January 2013

Bonding Multiple Network Interface Cards (NICs)




Linux network bonding combines two or more Ethernet interfaces into a single bonded interface. This provides high availability for your network connection and can improve performance. Bonding is also known as port trunking or teaming.

Bonding allows you to aggregate multiple ports into a single group, effectively combining their bandwidth into one logical connection. This lets you create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1 Gb/s ports into a single trunk port, which is equivalent to having one interface with 3 Gb/s of bandwidth.

Bonding NICs can help us achieve high network availability even if one of the network cards goes down. Bonded NICs are usually implemented for services that must not be interrupted, such as SAN storage. Do not assume that bonding will multiply data throughput; for that, we need multipathing.

1. Make sure we have the utilities needed to bond NICs. Run the following command to install them:
yum install ethtool -y
2. Create a new bonded interface called bond0. This will be our bonding master interface, and our real NICs will be used as slave interfaces. Whenever a slave NIC goes down, another slave takes over so the master interface is not affected:
touch /etc/sysconfig/network-scripts/ifcfg-bond0
vi  /etc/sysconfig/network-scripts/ifcfg-bond0
And add the following lines:
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.0.100
NETMASK=255.255.255.0
NETWORK=192.168.0.0
USERCTL=no
BOOTPROTO=none
BONDING_OPTS="mode=1"
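ifcfg files are plain KEY=value shell fragments, so a quick sanity check is to source one and echo the values back. A standalone sketch (it writes a temporary copy of the config above rather than touching /etc/sysconfig/network-scripts):

```shell
# sanity-check an ifcfg-style file by sourcing it as a shell fragment;
# a temporary copy is used here so the sketch runs standalone
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.0.100
NETMASK=255.255.255.0
BONDING_OPTS="mode=1"
EOF
. "$cfg"
echo "$DEVICE -> $IPADDR/$NETMASK ($BONDING_OPTS)"
rm -f "$cfg"
```

This catches typos like a missing `=` or an unquoted BONDING_OPTS before you restart the network.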

3. Now we need to change some values in the configuration file of every physical NIC we have. In this case, I need to change the following files under the /etc/sysconfig/network-scripts directory:
ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ifcfg-eth1:
DEVICE=eth1
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ifcfg-eth2:
DEVICE=eth2
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ifcfg-eth3:
DEVICE=eth3
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
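Since the four slave files differ only in their DEVICE line, they can be generated in a loop. A sketch that writes to a temporary directory so it runs standalone (in practice the target would be /etc/sysconfig/network-scripts):

```shell
# generate identical slave configs for eth0-eth3 in a loop;
# writes to a temp dir here so the sketch runs standalone --
# in practice the target is /etc/sysconfig/network-scripts
dir=$(mktemp -d)
for nic in eth0 eth1 eth2 eth3; do
  cat > "$dir/ifcfg-$nic" <<EOF
DEVICE=$nic
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
EOF
done
cat "$dir/ifcfg-eth2"
```

This keeps the slave files consistent; only DEVICE varies, which is exactly what the four listings above show.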
4. Now we need to register the bonding module inside CentOS as a device. Create a new file called bonding.conf under the /etc/modprobe.d directory:
touch /etc/modprobe.d/bonding.conf
vi /etc/modprobe.d/bonding.conf
Add the following lines:
alias bond0 bonding
options bond0 mode=1 miimon=100
The mode set here should match the mode in BONDING_OPTS above (mode 1, active-backup, in this example).
5. Load the bonding module into the kernel:
modprobe bonding
6. Restart the network:
service network restart
Done! You will notice that another interface called bond0 has been added to the interface list (ifconfig). You can monitor the bonding state by running the following command:
watch -n1 'cat /proc/net/bonding/bond0'
When the machine boots up, check the proc settings.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done! For more details on the different modes of bonding, please refer to unixfoo's modes of bonding.
To verify that the failover bonding works:
  • Run ifdown eth0, then check the "Currently Active Slave" in /proc/net/bonding/bond0.
  • Start a continuous ping to the bond0 IP address from a different machine and run ifdown on the active interface. The ping should not break.
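The active-slave check in the first bullet can be scripted by parsing the driver's status output. A sketch that is fed a captured sample via a here-doc so it runs standalone (on a live system, replace the here-doc with `cat /proc/net/bonding/bond0`):

```shell
# extract the currently active slave from bonding status output;
# the here-doc holds a captured sample so the sketch runs standalone --
# on a live system use: status=$(cat /proc/net/bonding/bond0)
status=$(cat <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2
MII Status: up
EOF
)
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "active slave: $active"
```

Run it before and after `ifdown eth0`; the reported slave should change while the ping keeps flowing.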


RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine how traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.
  • Mode0-(balance-rr)
    This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive for the bonded interface, the first is transmitted on the first slave and the second on the second slave; the third packet goes out on the first slave again, and so on. This provides load balancing and fault tolerance.
  • Mode1-(active-backup)
    This mode places one of the interfaces into a backup state and only makes it active if the link is lost by the active interface. Only one slave in the bond is active at any given time; a different slave becomes active only when the active slave fails. This mode provides fault tolerance.
  • Mode2-(balance-xor)
    Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.
  • Mode3-(broadcast)
    This mode transmits everything on all slave interfaces. It is the least used mode (only for specific purposes) and provides only fault tolerance.
  • Mode4-(802.3ad)
    This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.
  • Mode5-(balance-tlb)
    This is called adaptive transmit load balancing. Outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.
  • Mode6-(balance-alb)
    This is adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. Receive load balancing is achieved by ARP negotiation: the bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.
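The balance-xor selection described for mode 2 can be illustrated with shell arithmetic. A sketch with made-up MAC addresses; for brevity it XORs only the last octet of each address, which is what dominates the modulo for small slave counts (the real driver hashes the full addresses):

```shell
# sketch of the mode 2 (balance-xor) hash:
# (source MAC XOR destination MAC) modulo slave count;
# the MACs are made-up examples, and only the last octet is used for brevity
src_mac="00:13:72:80:62:f0"
dst_mac="00:1b:21:3a:8f:59"
slaves=2
src_last=${src_mac##*:}   # "f0"
dst_last=${dst_mac##*:}   # "59"
slave=$(( (0x$src_last ^ 0x$dst_last) % slaves ))
echo "frames for $dst_mac leave via slave $slave"
```

Because the hash depends only on the MAC pair, every frame to a given destination takes the same slave, which keeps frames in order but means a single flow never exceeds one link's bandwidth.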



