I have a server running CentOS 7 with four 10Gb/s interfaces. When I implement teaming using teamd conf files:
Main Teaming Interface
ifcfg-team0
DEVICE=team0
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=none
MTU=9000
TEAM_CONFIG='{"runner": {"name": "lacp","active": true, "fast_rate": true, "tx_hash": ["eth","ipv4","ipv6","l4"]}, "tx_balancer": {"name": "basic"}, "link_watch": {"name": "ethtool"}}'
Port Interfaces
ifcfg-enp5s0f0
HWADDR=XX:XX:XX:XX:XX:XX
DEVICE=enp5s0f0
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio": 100}'
MTU=9000
ifcfg-enp5s0f1
HWADDR=XX:XX:XX:XX:XX:XX
DEVICE=enp5s0f1
DEVICETYPE=TeamPort
ONBOOT=yes
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{"prio": 100}'
MTU=9000
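One thing worth ruling out first: teamd silently falls back to defaults if the JSON in TEAM_CONFIG is malformed, so a broken quote or comma can make it look like the hash parameters are being ignored. A quick sanity check (just a local parse of the strings from the ifcfg files above, not anything teamd itself runs):

```python
import json

# TEAM_CONFIG and TEAM_PORT_CONFIG values copied from the ifcfg files above.
team_config = ('{"runner": {"name": "lacp","active": true, "fast_rate": true, '
               '"tx_hash": ["eth","ipv4","ipv6","l4"]}, '
               '"tx_balancer": {"name": "basic"}, '
               '"link_watch": {"name": "ethtool"}}')
port_config = '{"prio": 100}'

cfg = json.loads(team_config)           # raises ValueError if malformed
print(cfg["runner"]["name"])            # -> lacp
print(cfg["runner"]["tx_hash"])         # -> ['eth', 'ipv4', 'ipv6', 'l4']
print(json.loads(port_config)["prio"])  # -> 100
```

In this case the strings parse cleanly, so a JSON syntax error is not the cause.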
The figures below are from the switch the server is connected to and were taken while testing different hash parameters:
[Figures: per-port traffic graphs from the switch for PORT 01, PORT 02, PORT 03, PORT 04]
Explanation:
1. Interval 05:00-05:10, hash was ["eth","ipv4","ipv6"]: as you can see, load balancing is very bad (one interface is at 1500Mb/s, the others at 200-500Mb/s).
2. Interval 05:20-05:25, hash was ["l3","l4"]: all traffic goes to port 1 !!! The other ports are practically empty !!!
3. Interval 05:30-05:35, hash was ["ipv4","l4"]: all traffic simply moves to a different overloaded port, port 3 !!! The other ports are practically empty !!!
4. Interval 05:50-06:10, hash was ["eth","ipv4","ipv6","l4"]: we are back to the situation where load balancing is very bad (one interface at 1500Mb/s, the others at 200-500Mb/s).
What is most puzzling is that incoming traffic to this server is balanced almost perfectly by our switch, which has its hash parameters set to "L3+L4" (blue line in all 4 images).
Conclusion:
Every hash combination that I tested (this is not the only test I did) produces the same results. Load balancing does not work unless one of the parameters is "eth" (MAC address), and when "eth" is in the hash, the teamd daemon seems to ignore all the other parameters and balances only by MAC address. This is further supported by the fact that the balancing goes from bad to worse when the outgoing link of our network starts carrying more outbound traffic to a single MAC address in the evening and at night (pictures below).
Outgoing traffic to a remote host:
"Overloaded" interface:
"Other" Interface:
Considering that this server generates extremely high outgoing traffic to a single MAC address (over 10Gb/s at peak; it shows as inbound in the pictures because the measurement is taken on the switch), MAC-only load balancing does not work for me.
Is it possible that the other parameters (ipv4, ipv6, l3, l4) are ignored and teamd falls back to MAC-only balancing?
After some time (in the evening, for example, when traffic on the network reaches 10Gb/s), 90% of the traffic goes to the "overloaded" port (9.8Gb/s compared to 1-2Gb/s on the other ports), so this is a major problem for our network environment !!!
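The MAC-only collapse described above can be reproduced with a toy model. This is a simplified stand-in, not teamd's actual hash function (the crc32-mod-4 scheme and all names here are my own assumptions), but the flow-pinning behaviour is the same: identical field values always select the same port, so a MAC-pair hash puts all traffic to one remote host on a single port, while an L4-aware hash spreads it.

```python
import zlib

PORTS = 4  # four 10Gb/s team ports


def egress_port(flow, fields):
    """Hash the chosen header fields, then take the result modulo the
    number of ports. Flows with identical field values always land on
    the same port."""
    key = "|".join(str(flow[f]) for f in fields).encode()
    return zlib.crc32(key) % PORTS


# 1000 flows to ONE remote host: same MAC/IP pair, varying TCP source port.
flows = [{"src_mac": "aa:bb:cc:dd:ee:01", "dst_mac": "11:22:33:44:55:66",
          "src_ip": "10.0.0.2", "dst_ip": "10.0.0.9",
          "sport": 1024 + i, "dport": 443} for i in range(1000)]


def distribution(fields):
    counts = [0] * PORTS
    for f in flows:
        counts[egress_port(f, fields)] += 1
    return counts


# MAC-only hash: every flow shares one MAC pair -> all 1000 flows on one port.
print(distribution(["src_mac", "dst_mac"]))
# L4-aware hash: source ports differ -> flows spread across all four ports.
print(distribution(["src_mac", "dst_mac", "sport", "dport"]))
```

If teamd really is hashing only on MAC, this is exactly the pattern in the graphs: one saturated port, three nearly idle ones.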
Question:
Have I made some error in the configuration, or is this a problem with the way CentOS 7 does load balancing across ports in a team?