Cannot add second slave to a bond inside of KVM

Hi,
When I try to enslave two VFs passed through to a VM as a bonded network device, enslaving always fails for the second device.
[root@localhost ~]# ifenslave bond0 m0 m1
[ 3857.028054] bond0: Enslaving m0 as a backup interface with a down link
[ 3857.037916] ixgbevf 0000:00:08.0: NIC Link is Up 1 Gbps
Master 'bond0', Slave 'm1': Error: Enslave failed
[root@localhost ~]# [ 3857.088905] bond0: link status definitely up for interface m0, 1000 Mbps full duplex
[ 3857.093483] bond0: making interface m0 the new active one
[ 3857.099445] bond0: first active interface up!
[ 3857.102622] IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
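For reference, the same enslave can be driven through the bonding sysfs interface instead of ifenslave (a sketch, assuming the bond0/m1 names above); the kernel then logs its exact refusal reason to dmesg:

```shell
#!/bin/sh
# Sketch: enslave a device via bonding's sysfs control file.
# Assumes bond0 already exists and m1 is the VF that fails to enslave.
BOND=bond0
SLAVE=m1
CTL=/sys/class/net/$BOND/bonding/slaves

if [ -w "$CTL" ]; then
    ip link set "$SLAVE" down      # a slave must be down before enslaving
    echo "+$SLAVE" > "$CTL"        # on failure, dmesg carries the reason
    dmesg | tail -n 5
else
    echo "bonding sysfs not available for $BOND" >&2
fi
```

This is only a diagnostic variant of the failing `ifenslave` step, not a fix; on a machine without bond0 it just reports that the sysfs file is absent.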
strace -f ifenslave bond0 eth1
...
ioctl(3, SIOCETHTOOL, 0x7fff7ad12c60) = 0
ioctl(3, SIOCGIFMTU, {ifr_name="bond0", ifr_mtu=1500}) = 0
ioctl(3, SIOCGIFFLAGS, {ifr_name="bond0", ifr_flags=IFF_UP|IFF_BROADCAST|IFF_RUNNING|IFF_MASTER|IFF_MULTICAST}) = 0
ioctl(3, SIOCGIFHWADDR, {ifr_name="bond0", ifr_hwaddr={sa_family=ARPHRD_ETHER, sa_data=54:01:02:80:00:00}}) = 0
ioctl(3, SIOCGIFMTU, {ifr_name="eth1", ifr_mtu=1500}) = 0
ioctl(3, SIOCGIFFLAGS, {ifr_name="eth1", ifr_flags=IFF_BROADCAST|IFF_MULTICAST}) = 0
ioctl(3, SIOCGIFHWADDR, {ifr_name="eth1", ifr_hwaddr={sa_family=ARPHRD_ETHER, sa_data=54:01:02:80:00:01}}) = 0
ioctl(3, SIOCSIFFLAGS, {ifr_name="eth1", ifr_flags=IFF_BROADCAST|IFF_MULTICAST}) = 0
ioctl(3, SIOCSIFADDR, {ifr_name="eth1", ifr_addr={sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("0.0.0.0")}}) = 0
ioctl(3, SIOCBONDENSLAVE, 0x7fff7ad12c90) = -1 EPERM (Operation not permitted)
ioctl(3, SIOCDEVPRIVATE, 0x7fff7ad12c90) = -1 EPERM (Operation not permitted)
ioctl(3, SIOCSIFHWADDR, {ifr_name="bond0", ifr_hwaddr={sa_family=ARPHRD_ETHER, sa_data=54:01:02:80:00:00}}) = 0
ioctl(3, SIOCSIFMTU, {ifr_name="eth1", ifr_mtu=1500}) = 0
write(2, "Master 'bond0', Slave 'eth1': Erro"..., 50Master 'bond0', Slave 'eth1': Error: Enslave failed
If I do eth1 first and eth0 second, eth0 fails.
# cat /etc/modprobe.d/bond0.conf
alias bond0 bonding
options bond0 mode=active-backup miimon=100
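One common cause of this exact symptom (first slave enslaves, second fails with EPERM regardless of order) is the MAC rewrite: in active-backup mode the bonding driver by default sets each slave's MAC to the bond's MAC, and a VF whose MAC was pinned on the host (e.g. with `ip link set <pf> vf <n> mac ...`) refuses that change. If that is the case here, bonding's `fail_over_mac=active` option tells the bond not to touch slave MACs. An untested sketch of the adjusted config, assuming the same bond0.conf as above:

```
# /etc/modprobe.d/bond0.conf -- sketch: do not rewrite slave MACs
alias bond0 bonding
options bond0 mode=active-backup miimon=100 fail_over_mac=active
```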
The virtualisation host is CentOS 7.8 with kernel 3.10.0-1127 and the guest is CentOS 7.9.
Is this a bug in KVM, or a known limitation of bonding under KVM?
The reason for using VFs rather than virtual networking is performance.