NFS mount at boot with defaults?

Issues related to configuring your network
johmut
Posts: 11
Joined: 2020/04/22 09:02:04

NFS mount at boot with defaults?

Post by johmut » 2020/08/06 09:08:17

Hi,

Running CentOS 8.2.2004, I need to mount an NFSv4 export from a Synology NAS:

Code: Select all

       # mount synology_ip:/exported-nfs-share /mnt/
The share is correctly mounted as type nfs4 with all defaults:

Code: Select all

       (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=<localhost_ip>,local_lock=none,addr=<synology_ip>)

I'm concerned about the 'hard' option. Suppose I put this in my /etc/fstab:

Code: Select all

       <synology_ip>:/exported-nfs-share  /mnt  nfs  defaults
Will my system hang at boot if the NFS server/share is unavailable for some reason? Should I rather specify intr or soft instead of hard?

One more thing: the NFS share mounts with nobody:nobody permissions? Client root can still touch a file there, though.
What's the best squash strategy on the NFS server (Synology) side? Currently it is set to none...

TIA,
joh
Last edited by johmut on 2020/08/06 09:23:11, edited 1 time in total.

TrevorH
Site Admin
Posts: 33216
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: NFS mount at boot with defaults?

Post by TrevorH » 2020/08/06 09:20:54

Running man nfs gives this:

Code: Select all

       soft / hard    Determines  the  recovery behavior of the NFS client after an NFS request times out.  If neither option is specified
                      (or if the hard option is specified), NFS requests are retried indefinitely.  If the soft option is specified,  then
                      the  NFS  client fails an NFS request after retrans retransmissions have been sent, causing the NFS client to return
                      an error to the calling application.

                      NB: A so-called "soft" timeout can cause silent data corruption in certain cases. As such, use the soft option  only
                      when client responsiveness is more important than data integrity.  Using NFS over TCP or increasing the value of the
                      retrans option may mitigate some of the risks of using the soft option.
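In practice, the mitigation the man page mentions would look something like this in fstab (a sketch only; the mount point and the timeo/retrans values are illustrative, not a recommendation):

Code: Select all

       # soft with TCP and extra retransmissions, per the man page's caveat
       <synology_ip>:/exported-nfs-share  /mnt  nfs  soft,proto=tcp,timeo=600,retrans=5  0  0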
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

johmut
Posts: 11
Joined: 2020/04/22 09:02:04

Re: NFS mount at boot with defaults?

Post by johmut » 2020/08/06 09:25:35

Thanks, I read man nfs ...
but since 'soft' is discouraged, my question remains: will 'hard' hang my system at boot if the share is ever unavailable?

TrevorH
Site Admin
Posts: 33216
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: NFS mount at boot with defaults?

Post by TrevorH » 2020/08/06 09:29:01

Personally I try never to use network-based filesystems from /etc/fstab, for precisely those reasons. I'd recommend using autofs to mount on demand, preferably with a timeout so that the share gets unmounted when not in use and remounted when next required. That avoids most network-based hangs, though it does have some overhead; for me the slight overhead is worth it to avoid hung mounts.
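As a minimal sketch of such a setup (the map file /etc/auto.nas, the directory /mnt/nas, the key synology, and the timeout are all placeholders; the export path is taken from your first post):

Code: Select all

       # /etc/auto.master -- manage /mnt/nas with an indirect map,
       # unmounting entries after 300 seconds of inactivity
       /mnt/nas  /etc/auto.nas  --timeout=300

       # /etc/auto.nas -- mount the key "synology" on first access
       synology  -rw,hard,vers=4.0  <synology_ip>:/exported-nfs-share
After systemctl enable --now autofs, the share appears under /mnt/nas/synology on first access and goes away again when idle.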
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

jlehtone
Posts: 4530
Joined: 2007/12/11 08:17:33
Location: Finland

Re: NFS mount at boot with defaults?

Post by jlehtone » 2020/08/06 11:55:07

I do recommend autofs too.

That said, systemd has an automount feature too; an option in fstab enables it:

Code: Select all

<synology_ip>:/exported-nfs-share  /mnt  nfs  defaults,noauto,nofail,x-systemd.automount 0 0
Note: I would not mount directly on /mnt; I would mount to /mnt/nas1.
That way the next NAS can get its own directory under /mnt as well.

The last time I checked, the systemd automount did not have an idle timeout. It will not unmount
unused filesystems the way autofs does, but it does delay the mount until first access.
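A sketch of the full entry with the dedicated mount point assumed above (newer systemd versions also document an x-systemd.idle-timeout= mount option which, where available, adds unmount-on-idle):

Code: Select all

       <synology_ip>:/exported-nfs-share  /mnt/nas1  nfs  defaults,noauto,nofail,x-systemd.automount  0  0
The generated unit name derives from the mount path, so it can be checked with:

Code: Select all

       systemctl status mnt-nas1.automount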
johmut wrote:
2020/08/06 09:08:17
One more thing: the NFS share mounts with nobody:nobody permissions?
The nobody:nobody is owner:group. That is related to permissions, but not quite the same thing.

What you have is name mapping. NFSv3 sent just uid and gid numbers to the server.
NFSv4 (primarily) sends names (with a domain), and the server maps them back to numbers.
See man idmapd.conf for the "domain" the client sends.
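The relevant setting is in /etc/idmapd.conf and has to agree on both ends; example.lan below is just a placeholder:

Code: Select all

       # /etc/idmapd.conf -- the NFSv4 id-mapping domain
       [General]
       Domain = example.lan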

Synology probably has its own way to define "users", i.e. mappings.
(I have a NetApp, which has local, LDAP, and AD users, and both POSIX and Windows mappings.)

The root_squash and all_squash export flags make the server present files with a squashed uid/gid
when the client is root (or anybody, respectively). (NetApp has an "admin access" checkbox in its GUI to enable no_root_squash.)
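On a plain Linux NFS server those flags would sit in /etc/exports; Synology sets this from its GUI instead, so the lines below are only illustrative (the client network is a placeholder):

Code: Select all

       # map root to nobody (root_squash), or map everyone (all_squash)
       /exported-nfs-share  192.168.1.0/24(rw,sync,root_squash)
       #/exported-nfs-share  192.168.1.0/24(rw,sync,all_squash)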

In other words: if you see "nobody", then you are either looking as root while root_squash is set,
or your name does not map to a number and so you get squashed.
Or you are truly nobody.

BShT
Posts: 585
Joined: 2019/10/09 12:31:40

Re: NFS mount at boot with defaults?

Post by BShT » 2020/08/14 19:51:05

I use soft NFS, but it is mostly a read-only share that supplies a document root for an Apache farm inside a VMware virtual switch.

Network issues are very rare.
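As a sketch of that kind of fstab entry (server name, paths, and option values here are illustrative only):

Code: Select all

       # read-mostly document root over NFS; soft bounds how long a dead server can stall Apache
       nfs_server:/docroot  /var/www/html  nfs  ro,soft,proto=tcp,timeo=600,retrans=3  0  0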
