Bond 5 Speed Issue

Issues related to configuring your network
tech0925
Posts: 9
Joined: 2019/03/13 20:51:27

Bond 5 Speed Issue

Post by tech0925 » 2019/03/13 21:01:50

Hi all,

I ran some speed tests and was only getting about 183.90 Mbit/s. I'm on CentOS 7 and I'm using a mode 5 (balance-tlb) bond across two NICs. Can anyone tell me what I did wrong here?

Code: Select all

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: enp7s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp7s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xxx
Slave queue ID: 0

Slave Interface: enp10s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xxx
Slave queue ID: 0

Code: Select all

ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet xx.xx.xxx.225  netmask 255.255.xxx.0  broadcast xx.xx.xxx.255
        inet6 xxxx:xxxx:xx:xxxx:xxxx:xxxx:xxxx:82ff  prefixlen 64  scopeid 0x0<global>
        inet6 xxxx::xxxx:xxxx:xxxx:82ff  prefixlen 64  scopeid 0x20<link>
        ether xxx  txqueuelen 1000  (Ethernet)
        RX packets 80431  bytes 10532048 (10.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27771  bytes 1777900 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp10s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether xxx  txqueuelen 1000  (Ethernet)
        RX packets 116078  bytes 14807817 (14.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 161083  bytes 10309794 (9.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 19  memory 0xdf640000-df660000  

enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet xx.xx.xxx.226  netmask 255.255.254.0  broadcast xx.xx.xxx.226
        inet6  xxxx:xxxx:xx:xxxx:xxxx:xxxx:xxxx:8301  prefixlen 64  scopeid 0x0<global>
        inet6 xxxx::xxxx:xxxx:xxxx:8301  prefixlen 64  scopeid 0x20<link>
        ether xxx  txqueuelen 1000  (Ethernet)
        RX packets 1913781  bytes 1564494178 (1.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1436042  bytes 1575542289 (1.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdfc40000-dfc60000  

enp7s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether xxx  txqueuelen 1000  (Ethernet)
        RX packets 254660  bytes 65000027 (61.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 161291  bytes 10329396 (9.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0xdfa40000-dfa60000  

enp9s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether xxx  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 18  memory 0xdf840000-df860000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4535  bytes 571584 (558.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4535  bytes 571584 (558.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Code: Select all

# ethtool enp7s0
Settings for enp7s0:
	Supported ports: [ TP ]
	Supported link modes:   10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Advertised pause frame use: No
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Speed: 1000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 1
	Transceiver: internal
	Auto-negotiation: on
	MDI-X: on (auto)
	Supports Wake-on: pumbg
	Wake-on: g
	Current message level: 0x00000007 (7)
			       drv probe link
	Link detected: yes

Code: Select all

# ethtool enp10s0
Settings for enp10s0:
	Supported ports: [ TP ]
	Supported link modes:   10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	                        1000baseT/Full 
	Advertised pause frame use: No
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Speed: 1000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 1
	Transceiver: internal
	Auto-negotiation: on
	MDI-X: on (auto)
	Supports Wake-on: pumbg
	Wake-on: g
	Current message level: 0x00000007 (7)
			       drv probe link
	Link detected: yes

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Bond 5 Speed Issue

Post by TrevorH » 2019/03/13 21:41:09

Most likely, as with almost all the other bonding modes, throughput is limited when all of your traffic goes to a single destination IP/port/MAC.
balance-tlb or 5

Adaptive transmit load balancing: channel bonding that
does not require any special switch support.

In tlb_dynamic_lb=1 mode; the outgoing traffic is
distributed according to the current load (computed
relative to the speed) on each slave.

In tlb_dynamic_lb=0 mode; the load balancing based on
current load is disabled and the load is distributed
only using the hash distribution.

Incoming traffic is received by the current slave.
If the receiving slave fails, another slave takes over
the MAC address of the failed receiving slave.

Prerequisite:

Ethtool support in the base drivers for retrieving the
speed of each slave.
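
If you want to experiment with the hash-based distribution mentioned above, tlb_dynamic_lb can be passed along with the other bond options. A rough sketch of what that could look like in the bond's ifcfg file on CentOS 7 (the contents here are an assumption, not your actual config):

Code: Select all

# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch only; merge into your existing file)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
# mode 5 = balance-tlb; miimon=100 matches the polling interval in your output.
# tlb_dynamic_lb=0 switches outgoing distribution from current-load to hash-based.
BONDING_OPTS="mode=balance-tlb miimon=100 tlb_dynamic_lb=0"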

tech0925
Posts: 9
Joined: 2019/03/13 20:51:27

Re: Bond 5 Speed Issue

Post by tech0925 » 2019/03/13 22:08:18

Thank you! Would you suggest a different bonding mode?

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Bond 5 Speed Issue

Post by TrevorH » 2019/03/13 23:32:44

It depends on your goals and traffic pattern. If this machine is going to serve lots of different clients, then the issue you're running into during testing would probably go away. Throughput is limited when all the traffic goes to one place; when it goes to multiple places, each destination can potentially use a different link.
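
If you want to see that effect for yourself, one way (the hostnames below are just placeholders) is to push traffic to two different clients at the same time and compare it with a single-client run:

Code: Select all

# on each of two client machines (placeholder names clientA and clientB):
iperf3 -s

# on the bonded server, test against both clients simultaneously;
# with two destinations the bond can spread the flows over both slaves
iperf3 -c clientA -t 30 &
iperf3 -c clientB -t 30 &
wait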

There is a detailed discussion in section "12. Configuring Bonding for Maximum Throughput" of /usr/share/doc/kernel-doc-3.10.0/Documentation/networking/bonding.txt, which is part of the kernel-doc package. You can also alter how the traffic is distributed across the slaves by changing the hashing mechanism.
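
As a rough sketch only (the exact option string is an assumption, and 802.3ad needs LACP configured on the switch side), both the mode and the hash policy go on the same BONDING_OPTS line:

Code: Select all

# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch; adjust to your setup)
# 802.3ad (LACP) with layer3+4 hashing spreads different flows across the
# slaves, but a single flow is still limited to one link's worth of bandwidth.
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

After changing it you'd need something like systemctl restart network (or a reboot) for the new options to take effect.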

tech0925
Posts: 9
Joined: 2019/03/13 20:51:27

Re: Bond 5 Speed Issue

Post by tech0925 » 2019/03/13 23:36:21

I see, thank you for your help! It's a web server hosting websites, so it will be serving many different clients. It sounds like I'm in a good place based on what you mentioned.
