Setting up a bonded network interface on RHEL 7

The following procedure describes how to set up a bonded network interface on Red Hat Enterprise Linux 7. It assumes that you already have a working single network interface, and now wish to move the system to a bonded set-up for network redundancy: for example, by connecting two separate network interfaces, preferably on two different network cards in the server, to two different network switches. This protects you both against the failure of a network card in the server and against the failure of a network switch.
First, log in as user root on the console of the server. We are going to change the current network configuration to a bonded configuration, and the system will temporarily lose network connectivity while we do so, so it is best to work from the console.

In this procedure, we'll be using network interfaces em1 and p3p1, on two different cards, to get card redundancy (in case one of the network cards fails).

Let's assume that IP address 172.29.126.213 is currently configured on network interface em1. You can verify this by running:

# ip a s
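The output should include a section for em1 along the following lines (an abbreviated, hypothetical example; your interface numbering and details will differ):

2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0a:f7:bd:b7:9e brd ff:ff:ff:ff:ff:ff
    inet 172.29.126.213/24 brd 172.29.126.255 scope global em1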
Also, verify with the ethtool command that there is indeed a good link status on both the em1 and p3p1 network interfaces:
# ethtool em1
# ethtool p3p1
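For each interface, check for a "Link detected: yes" line at the bottom of the ethtool output. A healthy 10 Gbit interface, for example, shows something like this (abbreviated example output; your speed and duplex values may differ):

# ethtool em1
Settings for em1:
        ...
        Speed: 10000Mb/s
        Duplex: Full
        ...
        Link detected: yes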
Run the following to list the bonding module info (the module should be available by default already, so this is just to verify):
# modinfo bonding
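The output should look similar to this (abbreviated; the kernel version in the path is just an example and will differ on your system):

filename:       /lib/modules/3.10.0-693.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL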
Create copies of the current network files, just for safe-keeping:
# cd /etc/sysconfig/network-scripts
# cp ifcfg-em1 /tmp
# cp ifcfg-p3p1 /tmp
Now, create a new file ifcfg-bond0 in /etc/sysconfig/network-scripts. We'll configure the IP address of the system (the one that was configured previously on network interface em1) on a new bonded network interface, called bond0. Make sure to update the file with the correct IP address, gateway and network mask for your environment:
# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.29.126.213
NETMASK=255.255.255.0
GATEWAY=172.29.126.1
BONDING_OPTS="mode=5 miimon=100"
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
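A note on the BONDING_OPTS line: mode=5 is balance-tlb (adaptive transmit load balancing), which is the "transmit load balancing" mode you'll see in the /proc/net/bonding/bond0 output later in this procedure, and miimon=100 makes the driver check the link state every 100 milliseconds. Other commonly used modes are mode=1 (active-backup) and mode=4 (802.3ad/LACP, which requires matching configuration on the network switches). If you would rather run a simple active-backup set-up, for example, the line would instead read:

BONDING_OPTS="mode=1 miimon=100"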
The next thing to do is to create two more files, one for each network interface that will be a slave of the bonded network interface. In our example, those are em1 and p3p1.

Create file /etc/sysconfig/network-scripts/ifcfg-em1 (be sure to adapt the file to your environment; for example, use the correct UUID, which you can find in the copies you've made of the previous network interface files). In this file, you'll also specify that the bond0 interface is now the master.
# cat ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
NAME=em1
UUID=cab24cdf-793e-4aa7-a093-50bf013910db
DEVICE=em1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Create file ifcfg-p3p1:
# cat ifcfg-p3p1
TYPE=Ethernet
BOOTPROTO=none
NAME=p3p1
UUID=5017c829-2a57-4626-8c0b-65e807326dc0
DEVICE=p3p1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
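Note: on some RHEL 7 systems, NetworkManager may also try to manage these interfaces. If that interferes with this set-up, you can optionally add the following line to each of the three ifcfg files, to leave them to the network service alone:

NM_CONTROLLED=no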
Now, we're ready to start using the new bonded network interface. Restart the network service:
# systemctl restart network.service
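If the restart reports an error, inspect the service status and the system journal for details:

# systemctl status network.service
# journalctl -xe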
Run the ip command to check the current network config:
# ip a s
The IP address should now be configured on the bond0 interface.
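In the ip output, em1 and p3p1 should now be flagged as slaves of bond0, and the IP address should be listed under bond0, roughly like this (abbreviated, hypothetical example):

2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
3: p3p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    inet 172.29.126.213/24 brd 172.29.126.255 scope global bond0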

Ping the default gateway, to test if your bonded network interface can reach the switch. In our example, the default gateway is set to 172.29.126.1:
# ping 172.29.126.1
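Successful replies look like this (example output; the response times will of course vary):

PING 172.29.126.1 (172.29.126.1) 56(84) bytes of data.
64 bytes from 172.29.126.1: icmp_seq=1 ttl=64 time=0.388 ms
64 bytes from 172.29.126.1: icmp_seq=2 ttl=64 time=0.402 ms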
This should work. If not, retrace the steps you've taken so far, or work with your network team to identify the issue.

Check that both interfaces of the bonded interface are up, and which network interface is currently active. You can do this by looking at the file /proc/net/bonding/bond0, which shows the currently active slave, and whether all slaves of the bonded network interface are up. For example:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: p3p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p3p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:ce:26:30
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
In the example above, the active network interface is p3p1. Let's bring it down, to see if the bond fails over to network interface em1. You can bring down a network interface using the ifdown command:
# ifdown p3p1
Device 'p3p1' successfully disconnected.
Again, look at the /proc/net/bonding/bond0 file. You can now see that the active network interface has changed to em1, and that network interface p3p1 is no longer listed in the file (because it is down):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
Now ping the default gateway again, and make sure it still works (now that we're using network interface em1 instead of network interface p3p1).

Then bring the p3p1 interface back up, using the ifup command:
# ifup p3p1
And check the bonding status again:
# cat /proc/net/bonding/bond0
It should show that the active network interface is still em1; the bond will not fail back to network interface p3p1 (after all, why would it? Network interface em1 works just fine).
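If you do want the bond to prefer one specific interface whenever it is available, the bonding driver supports a primary option (valid in the active-backup, balance-tlb and balance-alb modes). For example, to make the bond always fail back to p3p1 once its link returns, you could set the bonding options in ifcfg-bond0 like this (p3p1 is just an illustrative choice here):

BONDING_OPTS="mode=5 miimon=100 primary=p3p1"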

Now repeat the same test: bring down network interface em1, ping the default gateway again, check the bonding status, and bring em1 back up:
# ifdown em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
# ifup em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
If this all works fine, then you're all set.


