network card

Issues related to configuring your network
Post Reply
good_face
Posts: 70
Joined: 2019/10/15 13:29:09

network card

Post by good_face » 2021/01/27 08:27:20

How can I see the list of Ethernet cards on the server, and how do I know whether these Ethernet cards are up or down?
I believe a bond has been configured on the server. I want to see which Ethernet port is active and which is passive. I'm asking because I don't have much information. I would be glad if you could help.

MartinR
Posts: 714
Joined: 2015/05/11 07:53:27
Location: UK

Re: network card

Post by MartinR » 2021/01/27 09:10:46

Read ip(8). Typically use ip link to check if links are up/down and ip address to see, well, addresses!

Redacted and abbreviated examples:

Code: Select all

bash-4.2$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br3 state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
3: enp4s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: br3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    ...
bash-4.2$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br3 state UP group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1234:5678:9abc:def0/64 scope link
       valid_lft forever preferred_lft forever
3: enp4s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 1.2.3.255 scope global enp4s0
       valid_lft forever preferred_lft forever
4: br3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 1.2.3.255 scope global br3
       valid_lft forever preferred_lft forever
    inet6 fe80::1234:5678:9abc:def0/64 scope link 
       valid_lft forever preferred_lft forever
    ...
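As a cross-check, the same up/down information is exposed in sysfs, so you can list every interface with its operational state without parsing `ip` output. A minimal sketch (should work on any modern Linux):

```shell
#!/bin/sh
# List every network interface with its operational state
# (up / down / unknown), as reported by the kernel via sysfs.
for dev in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/operstate")"
done
```

Note that `lo` often reports `unknown` rather than `up`; that is normal for the loopback device.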

User avatar
TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: network card

Post by TrevorH » 2021/01/27 10:27:15

And since you mention "bond", look in /proc/net/bonding/bond0 (etc) for the status.
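For example, a small sketch that summarises every bond at once (`/proc/net/bonding/` only exists once the bonding driver has at least one bond configured):

```shell
#!/bin/sh
# Summarise every bond: which slave is currently active, plus the
# MII status of the bond itself and of each slave interface.
if [ -d /proc/net/bonding ]; then
    for bond in /proc/net/bonding/*; do
        echo "== $(basename "$bond") =="
        grep -E 'Currently Active Slave|^MII Status|^Slave Interface' "$bond"
    done
else
    echo "no bonds configured"
fi
```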

User avatar
jlehtone
Posts: 4523
Joined: 2007/12/11 08:17:33
Location: Finland

Re: network card

Post by jlehtone » 2021/01/27 10:34:59

Network cards are (usually) shown as PCI devices. Thus,

Code: Select all

lspci | grep -i net
If you do have NetworkManager.service in use (which is the default), then you can query with:

Code: Select all

nmcli
nmcli d s
nmcli d show
nmcli c s

PS. One "card" can contain multiple physical ports and one physical port can have multiple "partitions" and virtual functions (SR-IOV VF).

Furthermore, some connection types (like bond) create virtual interface devices. It is typical, but not compulsory, to name the first bond "bond0".
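One quick way to tell the physical ports from such virtual devices is to check for a backing `device` entry in sysfs: physical NICs have one (it points at the PCI or USB device), while bonds, bridges and `lo` do not. A sketch:

```shell
#!/bin/sh
# Classify each interface: physical NICs have a /device symlink
# pointing at the underlying hardware; virtual ones do not.
for dev in /sys/class/net/*; do
    if [ -e "$dev/device" ]; then
        echo "$(basename "$dev"): physical"
    else
        echo "$(basename "$dev"): virtual"
    fi
done
```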

good_face
Posts: 70
Joined: 2019/10/15 13:29:09

Re: network card

Post by good_face » 2021/01/28 10:49:19

I am asking because I am not fully versed in the subject. I see that one port is down. What could be the reason? Is there a problem with the cable connected to the switch? Can I bring it up with a command? One more thing: why is this port shown as down? I think it is a physical cable or hardware problem. What is your opinion, and what should I check? My questions may be a little awkward since I don't know exactly why it is down, but I know you can help me with what to check. Thanks all.

[root@xxxx ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
3: eno6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
4: eno7: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
5: eno8: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
[root@xxxx ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno6
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eno6
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX
Slave queue ID: 0
[root@xxxx~]# lspci | grep -i net
5d:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
5d:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
5d:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
5d:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
[root@xxxx~]# nmcli
Error: NetworkManager is not running.

User avatar
jlehtone
Posts: 4523
Joined: 2007/12/11 08:17:33
Location: Finland

Re: network card

Post by jlehtone » 2021/01/28 11:09:45

You have one "card" that has four ports. You have two bonds.
The "bond0" has at least one port (eno6) enslaved to it.
The "bond1" has at least two ports (eno[78]) enslaved to it.

There is most likely a file /etc/sysconfig/network-scripts/ifcfg-eno5 that shows how eno5 is configured.

Port eno5 states NO-CARRIER.
A cable might be broken or not properly connected (it does not take much for it to be "loose").
Alternatively, the port itself, or the port on the other device, might be broken.
Since there does not seem to be a physical connection, the interface remains DOWN.
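You can confirm the missing carrier straight from sysfs. A sketch (`eno5` is the port from your listing; substitute as needed — note the carrier file is only readable while the interface is administratively UP):

```shell
#!/bin/sh
# Report whether the given interface sees a physical link:
# carrier reads 1 when a signal is detected, 0 when the cable is
# dead or unplugged; the read fails if the interface is admin-down.
dev=${1:-eno5}
file=/sys/class/net/$dev/carrier
if c=$(cat "$file" 2>/dev/null); then
    echo "$dev carrier: $c"
else
    echo "$dev: no carrier reading (interface down or missing)"
fi
```

If carrier stays 0 with a known-good cable, try another cable or another switch port; `ethtool eno5` (if installed) also reports "Link detected".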


PS. The ports of the I350 are for copper Ethernet.
Some cards have an SFP/SFP+ slot that takes a transceiver (or a "DAC"). Transceivers have either a copper Ethernet or a fibre "port". A card might not support all transceiver models. That too yields NO-CARRIER.
As said, that cannot be the issue in your server.

good_face
Posts: 70
Joined: 2019/10/15 13:29:09

Re: network card

Post by good_face » 2021/01/28 11:20:54

Thank you very much for the detailed information. Thanks again.

Post Reply