
How do I configure the bonding device on Red Hat Enterprise Linux?

• Introduction
• Configuring bonded devices on Red Hat Enterprise Linux 5
  • Single bonded device on RHEL5
  • Multiple bonded devices on RHEL5
• Configuring bonded devices on Red Hat Enterprise Linux 4
  • Single bonded device on RHEL4
  • Multiple bonded devices on RHEL4
• Configuring bonded devices on Red Hat Enterprise Linux 3
  • Single bonded device on RHEL3
  • Multiple bonded devices on RHEL3
• Configuring bonded devices on Red Hat Enterprise Linux 2.1
• Bonding modes on Red Hat Enterprise Linux
  • Red Hat Enterprise Linux 5
    • Balance-rr (mode 0)
    • Active-backup (mode 1)
    • Balance-xor (mode 2)
    • Broadcast (mode 3)
    • 802.3ad (mode 4)
    • Balance-tlb (mode 5)
    • Balance-alb (mode 6)
  • Red Hat Enterprise Linux 4
  • Red Hat Enterprise Linux 3
    • Balance-rr (Mode 0)
    • Active-backup (Mode 1)
    • Balance-xor (Mode 2)
• Bonding parameters in general
  • General bonding parameters
  • ARP monitoring parameters
  • MII monitoring parameters

Introduction
Bonding (or channel bonding) is a technology enabled by the Linux kernel and Red Hat
Enterprise Linux that allows administrators to combine two or more network interfaces
to form a single, logical "bonded" interface for redundancy or increased throughput. The
behavior of the bonded interfaces depends upon the mode; generally speaking, modes

Copyright (c) 2009 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set
forth in the Open Publication License, v1.0 or later (available at http://www.opencontent.org/openpub/).

provide either hot standby or load balancing services. Additionally, they may provide link-integrity monitoring.

This article describes the configuration methods of bonding on Red Hat Enterprise Linux 3,
Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5.

Configuring bonded devices on Red Hat Enterprise Linux 5

Single bonded device on RHEL5


For detailed documentation of bonding configuration on RHEL5, please refer to:

• Deployment Guide - Channel Bonding Interfaces
• Deployment Guide - The Channel Bonding Module

To configure the bond0 device with the network interfaces eth0 and eth1, perform the
following steps:

1. Add the following line to /etc/modprobe.conf:

alias bond0 bonding


2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"


Note:

• Configure the bonding parameters in the file /etc/sysconfig/network-scripts/ifcfg-bond0, as above:
BONDING_OPTS="mode=0 miimon=100".
• The behavior of the bonded interfaces depends upon the mode. Mode 0 (balance-rr) is the default;
it transmits packets in round-robin order across the slaves, providing load balancing and fault
tolerance. For more information about the bonding modes, refer to The bonding modes supported in
Red Hat Enterprise Linux.

3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>.
Both eth0 and eth1 should look like the following example:

DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

Note:

• Replace <N> with the numerical value for the interface, such as 0 and 1 in this example. Replace the
HWADDR value with the MAC address of the interface.
• Red Hat suggests configuring the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-eth<N>.

4. Restart the network service:

# service network restart


5. To check the bonding status, view the following file:

# cat /proc/net/bonding/bond0
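The status file lists the bond's mode, link state, and each enslaved interface. As a quick sanity check, the slave names can be extracted from it. The sketch below runs the extraction against an illustrative sample of the file's layout (abbreviated; not captured from a real system):

```shell
# Illustrative sample of /proc/net/bonding/bond0 content (abbreviated).
status='Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up'

# Extract the enslaved interface names. On a live system you would read
# the real file: awk '/^Slave Interface:/ {print $3}' /proc/net/bonding/bond0
slaves=$(printf '%s\n' "$status" | awk '/^Slave Interface:/ {print $3}')
printf '%s\n' "$slaves"
```

Both eth0 and eth1 should appear; a missing slave usually means the MASTER or SLAVE lines in its ifcfg-eth<N> file are wrong.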

Multiple bonded devices on RHEL5


In Red Hat Enterprise Linux 5.3 (or after updating to initscripts-8.45.25-1.el5) and later, configuring
multiple bonding channels is similar to configuring a single bonding channel. Set up the ifcfg-bond<N>
and ifcfg-eth<X> files as if there were only one bonding channel. You can specify different
BONDING_OPTS for different bonding channels so that they can have different modes and other settings.
Refer to section 14.2.3, "Channel Bonding Interfaces", in the Red Hat Enterprise Linux 5 Deployment
Guide for more information.

To configure the bond0 device with the Ethernet interfaces eth0 and eth1, and the bond1 device with
the Ethernet interfaces eth2 and eth3, perform the following steps:

1. Add the following line to /etc/modprobe.conf:

alias bond0 bonding
alias bond1 bonding
2. Create the channel bonding interface files ifcfg-bond0 and ifcfg-bond1 in the /etc/sysconfig/network-scripts/ directory:

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=0 miimon=100"
# cat /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
IPADDR=192.168.30.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=50"

Note: bond0 and bond1 use different bonding modes. The bond0 device uses the round-robin policy
(mode=0), and the bond1 device uses the active-backup policy (mode=1). For more information about
the bonding modes, refer to The bonding modes supported in Red Hat Enterprise Linux.

3. Configure the Ethernet interfaces in the files /etc/sysconfig/network-scripts/ifcfg-eth<N>.
Both eth0 and eth1 should look like the following example:


DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

Note:

• Replace <N> with the numerical value for the interface, such as 0 and 1 in this example. Replace the
HWADDR value with the MAC address of the interface.
• Red Hat suggests configuring the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-eth<N>.

4. Restart the network service:

# service network restart


5. To check the bonding status, view the following files:

# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1

Configuring bonded devices on Red Hat Enterprise Linux 4

Single bonded device on RHEL4


For detailed documentation of bonding configuration on RHEL4, please refer to:

• Section 8.2.3 "Channel Bonding Interfaces"
• Section 22.5.2 "The Channel Bonding Module"

To configure the bond0 device with the network interfaces eth0 and eth1, perform the
following steps:

1. Add the following lines to /etc/modprobe.conf:

alias bond0 bonding
options bonding mode=1 miimon=100

Note:


• The bonding parameters are configured in the file /etc/modprobe.conf. This differs from RHEL5:
on RHEL5 all bonding parameters are passed in the BONDING_OPTS variable of ifcfg-bond<N>, while
on RHEL4 they are passed to the module in /etc/modprobe.conf (with an options line, or the
'install' syntax when multiple bonds need different options).
• mode=1 is the active-backup policy. For more information about the bonding modes, refer to
The bonding modes supported in Red Hat Enterprise Linux.

2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.50.111
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
3. Configure the ethernet interface in the file /etc/sysconfig/network-scripts/ifcfg-eth<N>. In
this example, both eth0 and eth1 should look like this:

DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

Note:

• Replace the <N> with the numerical value for the interface, such as 0 and 1 in this example. Replace the
HWADDR value with the MAC for the interface.
• Red Hat suggests that you configure the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-eth<N>.
• "54:52:00:26:90:fc" is the hardware (MAC) address of the Ethernet card in this system.


Multiple bonded devices on RHEL4


To configure multiple bonding channels on RHEL4, first set up the ifcfg-bond<N> and ifcfg-
eth<X> files as you would for a single bonding channel, shown in the previous section.

Configuring multiple channels requires a different setup in /etc/modprobe.conf. If the two
bonding channels have the same bonding options, such as the bonding mode, monitoring
frequency and so on, add the max_bonds option. For example:

alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 mode=balance-rr miimon=100

If the two bonding channels have different bonding options (for example, one is using round-robin
mode and one is using active-backup mode), the bonding module has to be loaded twice
with different options. For example, in /etc/modprobe.conf:

install bond0 /sbin/modprobe --ignore-install bonding -o bonding0 mode=0 miimon=100 primary=eth0
install bond1 /sbin/modprobe --ignore-install bonding -o bonding1 mode=1 miimon=50 primary=eth2

If there are more bonding channels, add one install bond<N> /sbin/modprobe --ignore-install
bonding -o bonding<N> options line per bonding channel.

Note: The use of -o bondingX to get different options for multiple bonds was not possible in
Red Hat Enterprise Linux 4 GA and 4 Update 1.
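Following that pattern, a hypothetical third channel with its own options would add one more line (mode and interval values here are illustrative, not prescriptive):

```shell
install bond2 /sbin/modprobe --ignore-install bonding -o bonding2 mode=0 miimon=100
```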

After the file /etc/modprobe.conf is modified, restart the network service:

# service network restart


Configuring bonded devices on Red Hat Enterprise Linux 3

Single bonded device on RHEL3


For detailed documentation of bonding configuration on RHEL3, please refer to:

• Section A.3.2. The Channel Bonding Module
• Section 8.2.3. Channel Bonding Interfaces

To configure the bond0 device with the network interfaces eth0 and eth1, perform the
following steps:

1. Add the following lines to /etc/modules.conf:

alias bond<N> bonding
options bonding mode=1 miimon=100

Note: Replace <N> with the numerical value for the interface, such as 0 in this example.

2. Create the channel bonding interface file ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory:

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETMASK=255.255.255.0
IPADDR=192.168.122.225
USERCTL=no
3. Configure the Ethernet interfaces in /etc/sysconfig/network-scripts/ifcfg-eth<N>. Both eth0
and eth1 should look like the following example:

# cat /etc/sysconfig/network-scripts/ifcfg-eth<N>
DEVICE=eth<N>
BOOTPROTO=none
HWADDR=54:52:00:26:90:fc
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

Note:

• Replace the <N> with the numerical value for the interface, such as 0 and 1 in this example. Replace the
HWADDR value with the MAC for the interface.
• Red Hat suggests that you configure the MAC address of the Ethernet card in the file /etc/sysconfig/network-scripts/ifcfg-eth<N>.

4. Restart the network service:

# service network restart

Multiple bonded devices on RHEL3


To configure multiple bonding channels on RHEL3, first set up the ifcfg-bond<N> and ifcfg-
eth<X> files as you would for a single bonding channel, shown in the previous section.

Configuring multiple channels requires a different setup in /etc/modules.conf. If the two
bonding channels have different bonding options (for example, one is using round-robin
mode and one is using active-backup mode), the bonding module has to be loaded twice with
different options. For example, in /etc/modules.conf:

alias bond0 bonding
options bond0 -o bond0 mode=1 miimon=100
alias bond1 bonding
options bond1 -o bond1 mode=0 miimon=50

After the file /etc/modules.conf is modified, restart the network service:

# service network restart


Configuring bonded devices on Red Hat Enterprise Linux 2.1


Red Hat does not provide bonding support on Red Hat Enterprise Linux 2.1, which has reached the
end of its Red Hat Enterprise Linux life cycle. For more information, refer to
http://www.redhat.com/security/updates/errata/

Bonding modes on Red Hat Enterprise Linux


For information on the bonding modes supported in Red Hat Enterprise Linux, please refer
to the kernel document /usr/share/doc/kernel-doc-{version}/Documentation/networking/bonding.txt.
To read this document, you will need to install the kernel-doc RPM package.

Red Hat Enterprise Linux 5

Balance-rr (mode 0)

Round-robin policy: transmits packets in sequential order from the first available slave
through the last.

• This mode provides load balancing and fault tolerance.

Active-backup (mode 1)

Active-backup policy: only one slave in the bond is active. A different slave becomes active
only if the active slave fails. The bond's MAC address is externally visible on only one port
(network adapter) to avoid confusing the switch.

In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will
issue one or more gratuitous ARPs on the newly active slave. One gratuitous ARP is issued
for the bonding master interface and each VLAN interface configured above it, assuming
that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN
interfaces are tagged with the appropriate VLAN id.


• This mode provides fault tolerance.
• The primary option affects the behavior of this mode.

Balance-xor (mode 2)

XOR policy: transmits based on the selected transmit hash policy. The default policy is a
simple ((source MAC address XOR'd with destination MAC address) modulo slave count).
Alternate transmit policies may be selected via the xmit_hash_policy option, described below.

• This mode provides load balancing and fault tolerance.
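The default hash can be illustrated with a small arithmetic sketch. In the layer2 policy the XOR is taken over the final octets of the two MAC addresses; the byte values below are made-up examples:

```shell
# slave_index = (last byte of source MAC XOR last byte of destination MAC)
#               modulo slave_count
src_last=252    # e.g. a source MAC ending in :fc (0xfc = 252)
dst_last=1      # e.g. a destination MAC ending in :01
slave_count=2
slave=$(( (src_last ^ dst_last) % slave_count ))
echo "frames for this peer leave on slave $slave"
```

Because the hash depends only on the MAC address pair, all traffic between the same two hosts always uses the same slave; load balances across peers, not across flows.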

Broadcast (mode 3)

Broadcast policy: transmits everything on all slave interfaces.

• This mode provides fault tolerance.

802.3ad (mode 4)

IEEE 802.3ad dynamic link aggregation: this mode creates aggregation groups that share
the same speed and duplex settings, and uses all slaves in the active aggregator according
to the 802.3ad specification. Slave selection for outgoing traffic is done according to
the transmit hash policy, which may be changed from the default simple XOR policy via
the xmit_hash_policy option, documented below. Note that not all transmit policies may
be 802.3ad compliant, particularly with regard to the packet misordering requirements
described in section 43.2.4 of the 802.3ad standard. Differing peer implementations will have
varying tolerances for noncompliance.

Prerequisites:

• ethtool support in the base drivers for retrieving the speed and duplex of each slave.
• a switch that supports IEEE 802.3ad dynamic link aggregation. Most switches will require
some type of configuration to enable 802.3ad mode.


Balance-tlb (mode 5)

Adaptive transmit load balancing: channel bonding that does not require any special switch
support. The outgoing traffic is distributed according to the current load (computed relative
to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving
slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisite:

• ethtool support in the base drivers for retrieving the speed of each slave.

Balance-alb (mode 6)

Adaptive load balancing: includes balance-tlb and receive load balancing (rlb) for IPv4 traffic,
and does not require any special switch support. The receive load balancing is achieved by
ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on
their way out and overwrites the source hardware address with the unique hardware address
of one of the slaves in the bond, such that different peers use different hardware addresses
for the server.

Receive traffic from connections created by the server is also balanced. When the local
system sends an ARP request the bonding driver copies and saves the peer's IP information
from the ARP packet. When the ARP reply arrives from the peer, its hardware address is
retrieved and the bonding driver initiates an ARP reply to this peer assigning it to one of
the slaves in the bond. A problematic outcome of using ARP negotiation for balancing is
that each time that an ARP request is broadcast it uses the hardware address of the bond.
Hence, peers learn the hardware address of the bond and the balancing of receive traffic
collapses to the current slave. This is handled by sending updates (ARP Replies) to all the
peers with their individually assigned hardware address such that the traffic is redistributed.
Receive traffic is also redistributed when a new slave is added to the bond and when an
inactive slave is reactivated. The receive load is distributed sequentially (round-robin) among
the group of highest-speed slaves in the bond.

When a link is reconnected or a new slave joins the bond, the receive traffic is redistributed
among all active slaves in the bond by initiating ARP Replies with the selected MAC address
to each of the clients. The updelay parameter (detailed below) must be set to a value equal
to or greater than the switch's forwarding delay so that the ARP Replies sent to the peers will
not be blocked by the switch.

Prerequisites:

• ethtool support in the base drivers for retrieving the speed of each slave
• base driver support for setting the hardware address of a device while it is open. This is required so that
there will always be one slave in the team using the bond hardware address (the curr_active_slave) while
having a unique hardware address for each slave in the bond. If the curr_active_slave fails its hardware
address is swapped with the new curr_active_slave that was chosen.

Red Hat Enterprise Linux 4


The bonding modes supported on RHEL4 are the same as the bonding modes on RHEL5. For details,
refer to the kernel document /usr/share/doc/kernel-doc-2.6.9/Documentation/networking/bonding.txt

Red Hat Enterprise Linux 3

Balance-rr (Mode 0)

Round-robin policy: transmits packets in sequential order from the first available slave
through the last.

• This mode provides load balancing and fault tolerance.

Active-backup (Mode 1)

Active-backup policy: only one slave in the bond is active. A different slave becomes active
if, and only if, the active slave fails. The bond's MAC address is externally visible on only one
port (network adapter) to avoid confusing the switch.

In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding will
issue one or more gratuitous ARPs on the newly active slave. One gratuitous ARP is issued
for the bonding master interface and each VLAN interface configured above it, provided
that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN
interfaces are tagged with the appropriate VLAN id.


• This mode provides fault tolerance.
• The primary option affects the behavior of this mode.

Balance-xor (Mode 2)

XOR policy: transmits based on the selected transmit hash policy. The default policy is a
simple ((source MAC address XOR'd with destination MAC address) modulo slave count).
Alternate transmit policies may be selected via the xmit_hash_policy option, described below.

• This mode provides load balancing and fault tolerance.

Bonding parameters in general


It is critical that either the miimon or arp_interval and arp_ip_target parameters be specified,
otherwise serious network degradation will occur during link failures. Very few devices do
not support at least miimon, so it should always be used.

General bonding parameters

max_bonds: specifies the number of bonding devices to create for this instance of the
bonding driver. For example, if max_bonds is 3, and the bonding driver is not already
loaded, then bond0, bond1 and bond2 will be created. The default value is 1.
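For example, a modprobe.conf-style configuration requesting three bonds from one driver load might look like this (values illustrative):

```shell
alias bond0 bonding
alias bond1 bonding
alias bond2 bonding
options bonding max_bonds=3 miimon=100
```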

ARP monitoring parameters


• arp_interval: specifies the ARP link monitoring frequency in milliseconds.
• arp_ip_target: specifies the IP addresses to use as ARP-monitoring peers when arp_interval is > 0.
Multiple IP addresses must be separated by commas. At least one IP address must be given for ARP
monitoring to function. The maximum number of targets that can be specified is 16.
• arp_validate: specifies whether or not ARP probes and replies should be validated in active-backup
mode. This causes the ARP monitor to examine incoming ARP requests and replies, and to consider a
slave up only if it is receiving the appropriate ARP traffic. This parameter can have the following
values:
  • none (0). This is the default.
  • active (1). Validation is performed only for the active slave.
  • backup (2). Validation is performed only for backup slaves.
  • all (3). Validation is performed for all slaves.

For the active slave, the validation checks ARP replies to confirm that they were generated
by an arp_ip_target. Since backup slaves do not typically receive these replies, the
validation performed for backup slaves is on the ARP request sent out via the active slave.
It is possible that some switch or network configurations may result in situations wherein
the backup slaves do not receive the ARP requests; in such a situation, validation of backup
slaves must be disabled.
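Put together, an ARP-monitored active-backup bond on RHEL5 could be declared like this in ifcfg-bond0 (the addresses and interval are illustrative; the targets should be hosts on the bond's own subnet that reliably answer ARP):

```shell
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=192.168.50.1,192.168.50.254 arp_validate=active"
```

Typically only one monitoring scheme, ARP (arp_interval) or MII (miimon), is enabled per bond.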

MII monitoring parameters


• miimon: specifies the MII link monitoring frequency in milliseconds. This determines how often the link
state of each slave is inspected for link failures. A value of 0 disables MII link monitoring. A value of 100
is a good starting point. The use_carrier option, listed below, affects how the link state is determined. The
default value is 0.
• updelay: specifies the time, in milliseconds, to wait before enabling a slave after a link recovery has been
detected.
• downdelay: specifies the time, in milliseconds, to wait before disabling a slave after a link failure has been
detected.
• use_carrier: specifies whether or not miimon should use MII/ETHTOOL ioctls or netif_carrier_ok() to
determine the link status. A value of 1 enables the use of netif_carrier_ok() (faster, better, but not always
supported), a value of 0 will use the deprecated MII/ETHTOOL ioctls. The default value is 1.
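As a sketch, the MII parameters above combine in BONDING_OPTS like this (values illustrative; updelay and downdelay should be multiples of miimon, and the driver rounds them down if they are not):

```shell
BONDING_OPTS="mode=1 miimon=100 updelay=200 downdelay=200 use_carrier=1"
```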

