Can't get bond to work, Partner Mac Address all Zeroes

Issues related to configuring your network
sreeve29
Posts: 7
Joined: 2017/04/20 13:33:45

Can't get bond to work, Partner Mac Address all Zeroes

Post by sreeve29 » 2019/11/07 18:12:29

Any suggestions are welcome.
The server is connected to an HPE switch.
LACP packets are sent from the switch to the CentOS 8 machine (a VM on ESXi); verified with Wireshark on the CentOS side.

It appears that the LACP packets are silently discarded.
I can't find where any "LACP log file" might be on CentOS.
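
As far as I can tell there is no dedicated LACP log; the bonding driver logs through the kernel, so the closest things I've found to check are roughly these (assuming the bond is called bond0):

journalctl -k | grep -i bond
dmesg | grep -iE 'bond|802.3ad'
cat /proc/net/bonding/bond0     # per-port 802.3ad/LACP state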



[root@localhost network-scripts]# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=static
USERCTL=no
ONBOOT=yes
IPADDR=147.34.196.129
NETMASK=255.255.255.192
BONDING_OPTS="mode=4 miimon=100 lacp_rate=slow xmit_hash_policy=layer2+3"

[root@localhost network-scripts]# cat ifcfg-ens192
TYPE=Ethernet
BOOTPROTO=none
DEVICE=ens192
ONBOOT=yes
HWADDR="00:50:56:89:e0:93"
MASTER=bond0
SLAVE=yes

[root@localhost network-scripts]# cat ifcfg-ens224
TYPE=Ethernet
BOOTPROTO=none
DEVICE=ens224
ONBOOT=yes
HWADDR="00:50:56:89:d9:f4"
MASTER=bond0
SLAVE=yes


BUT:

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:50:56:89:e0:93
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 15
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

Slave Interface: ens192
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:89:e0:93
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 2
details actor lacp pdu:
system priority: 65535
system mac address: 00:50:56:89:e0:93
port key: 15
port priority: 255
port number: 1
port state: 77
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1

Slave Interface: ens224
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:50:56:89:d9:f4
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: 00:50:56:89:e0:93
port key: 15
port priority: 255
port number: 2
port state: 69
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
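
If I'm decoding the 802.3ad port state bits correctly (1 = activity, 2 = timeout, 4 = aggregation, 8 = synchronization, 16 = collecting, 32 = distributing, 64 = defaulted, 128 = expired), then roughly:

actor port state 77 = 64+8+4+1 -> active, aggregatable, in sync, but defaulted
actor port state 69 = 64+4+1   -> active, aggregatable, defaulted
partner port state 1           -> activity bit only, i.e. the driver's default partner values

The defaulted bit and the all-zero partner MAC would both fit the idea that the bond never accepts a partner LACPDU.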


[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:50:56:89:e1:74 brd ff:ff:ff:ff:ff:ff
inet 10.100.28.70/25 brd 10.100.28.127 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe89:e174/64 scope link
valid_lft forever preferred_lft forever
3: ens192: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 00:50:56:89:e0:93 brd ff:ff:ff:ff:ff:ff
4: ens224: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 00:50:56:89:e0:93 brd ff:ff:ff:ff:ff:ff
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:c0:81:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
7: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:c0:81:39 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:50:56:89:e0:93 brd ff:ff:ff:ff:ff:ff
inet 147.34.196.129/26 brd 147.34.196.191 scope global noprefixroute bond0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe89:e093/64 scope link
valid_lft forever preferred_lft forever

jlehtone
Posts: 4523
Joined: 2007/12/11 08:17:33
Location: Finland

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by jlehtone » 2019/11/07 21:16:22

You show the content of ifcfg files. I presume you are not using NetworkManager? I would try with NM, and not just a 'bond' but also a 'team'.
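
Something along these lines ought to build the whole thing under NM. This is only a sketch (connection names are arbitrary, untested against your exact setup; adjust the address and interface names):

nmcli con add type bond ifname bond0 con-name bond0 \
    bond.options "mode=802.3ad,miimon=100,lacp_rate=slow,xmit_hash_policy=layer2+3" \
    ipv4.method manual ipv4.addresses 147.34.196.129/26
nmcli con add type ethernet ifname ens192 con-name bond0-ens192 master bond0 slave-type bond
nmcli con add type ethernet ifname ens224 con-name bond0-ens224 master bond0 slave-type bond
nmcli con up bond0

For a 'team' the equivalent would be type team with team.config '{"runner": {"name": "lacp"}}'.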


You have a VM. Is the libvirtd.service running in it on purpose (you want VMs inside a VM), or simply because the installation starts it by default? (It should have no effect on the bond.)

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by TrevorH » 2019/11/07 23:34:47

You're trying to do this in a VM? Are the ethernet cards physically attached to the VM?
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

sreeve29
Posts: 7
Joined: 2017/04/20 13:33:45

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by sreeve29 » 2019/11/08 13:37:37

Network Manager seems to be running fine:

[root@localhost ~]# chkconfig NetworkManager on
Note: Forwarding request to 'systemctl enable NetworkManager.service'.
[root@localhost ~]# service NetworkManager status
Redirecting to /bin/systemctl status NetworkManager.service
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-11-07 14:11:47 EST; 18h ago
Docs: man:NetworkManager(8)
Main PID: 863 (NetworkManager)
Tasks: 3 (limit: 26213)
Memory: 12.2M
CGroup: /system.slice/NetworkManager.service
└─863 /usr/sbin/NetworkManager --no-daemon

Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.1306] device (ens192): Activation: successful, device activated.
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.1962] device (bond0): carrier: link connected
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2774] device (ens224): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2846] device (bond0): enslaved bond slave ens224
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2847] device (ens224): Activation: connection 'System ens224' enslaved, continuing activation
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2850] device (ens224): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2866] device (ens224): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.2869] device (ens224): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Nov 07 14:16:15 localhost.localdomain NetworkManager[863]: <info> [1573154175.3097] device (ens224): Activation: successful, device activated.
Nov 08 08:26:48 localhost.localdomain NetworkManager[863]: <info> [1573219608.9924] agent-manager: req[0x55da42278190, :1.286/org.gnome.Shell.NetworkAgent/1000]: agent registered

Is there a GUI that I can use to create a bond? Is NetworkManager a GUI? I have access to the CentOS console via vCenter and can run anything.

Yes, it's in a VM. The VM is configured properly to use the two virtual switches presented by vCenter.
The switch (an HPE 12900) sends LACP out on each interface, and the LACP frames are received on each interface in CentOS.
CentOS seems to do nothing with them.
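
One check I still want to make (if I have it right, 0x8809 is the slow-protocols ethertype that LACP uses) is whether the bond is actually transmitting LACPDUs back out of the slaves, not just receiving them:

tcpdump -e -nn -i ens192 ether proto 0x8809
tcpdump -e -nn -i ens224 ether proto 0x8809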

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by TrevorH » 2019/11/08 14:27:19

Yes, it's in a VM. The VM is configured properly to use the two virtual switches presented by vCenter.
To stand a chance of getting this to work, I am pretty sure that you cannot use virtual interfaces. Those go down into VMware, which treats them separately and knows nothing about bonding. I suspect your only chance of making this work is to dedicate the physical network interfaces to the VM (thus making them unavailable to all other guests).
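
A quick way to tell which case you are in from inside the guest (the driver names are only examples):

ethtool -i ens192
# driver: vmxnet3 or e1000/e1000e -> a virtual NIC provided by the vSwitch
# driver: ixgbe, i40e, bnx2x, ... -> a physical NIC passed through to the VM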

If your goal is really to set up bonding properly then I think you should be doing this on the host machine and not on the guests.
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

sreeve29
Posts: 7
Joined: 2017/04/20 13:33:45

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by sreeve29 » 2019/11/08 14:58:18

Coincidentally, I do have an interest in getting LACP to work between ESXi and an HPE switch.
That is another project I'm working on at the moment.

But for this task I'll take your advice and install a clean CentOS 8 onto a DL360 server and take it from there. No VMs.
Hopefully CentOS 8 can install on a Gen6.

Thanks.

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by TrevorH » 2019/11/08 15:06:02

No idea about 8, but I have run CentOS 7 on a G6. Models earlier than the G6 had unsupported RAID controllers; not sure if 8 has deprecated those further. I don't think so - modinfo hpsa still lists 103c:3243 as supported.
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

BShT
Posts: 584
Joined: 2019/10/09 12:31:40

Re: Can't get bond to work, Partner Mac Address all Zeroes

Post by BShT » 2019/11/08 17:23:53

My VMware vSphere Essentials Plus Kit can balance NICs, but not bond them...
