RHEL6–35–NFS
Which packages are
required to run NFS?
nfs-utils & rpcbind
How to check whether the required NFS packages are installed on a RHEL system?
[root@rhel6-test1 ~]# rpm -qa rpcbind
rpcbind-0.2.0-11.el6.x86_64
[root@rhel6-test1 ~]# rpm -qa |grep -i nfs
=====Nothing in O/P=====================
No NFS packages are installed on this system, so install them:
[root@rhel6-test1 ~]# yum install -y nfs*
[root@rhel6-test1 ~]# rpm -qa |grep -i nfs
nfs-utils-1.2.3-36.el6.x86_64
nfs4-acl-tools-0.3.3-6.el6.x86_64
nfs-utils-lib-1.1.5-6.el6.x86_64
Name the main configuration files for NFS on a RHEL system.
/etc/exports: All exported files and directories are defined here, at the NFS server end.
/etc/fstab: To make NFS mounts persistent, we add an entry here.
/etc/sysconfig/nfs: Controls which ports rpc and the other services listen on.
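The server-side and client-side halves of this configuration meet in a single line format in /etc/exports: "share client(options)". As a small sketch (the share path, client pattern, and options below are illustrative values, not taken from any particular server), a helper that builds such an entry:

```shell
#!/bin/sh
# Build one /etc/exports entry: "<share> <client>(<options>)".
# All values used here are illustrative, not from a real server.
exports_line() {
    share="$1"; client="$2"; opts="$3"
    printf '%s %s(%s)\n' "$share" "$client" "$opts"
}

# Export /nfstest1 to every host, read-write, synchronous writes:
exports_line /nfstest1 '*' 'rw,sync'
# -> /nfstest1 *(rw,sync)
```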
A Network File System (NFS) allows remote hosts to mount file systems
over a network and interact with those file systems as though they are mounted
locally.
NFS allows local-style access to remote shares.
Red Hat Enterprise Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. By default, RHEL 6 uses NFSv4 if the server supports it.
NFSv1
NFSv1 was the development stage of the NFS protocol, used only for in-house experimental purposes. When a stable version of NFS was ready, the developers released it as a new version, known as NFSv2.
NFSv2
Supports only 32-bit file offsets, so only the first 2 GB of a file could be read, and it operated only over UDP.
NFSv3
Supports 64 bit file system.
It can handle files larger than 2 GB.
NFSv3 supports asynchronous writes on the server. Asynchronous writes
improve write performance.
NFSv3 supports additional file attributes in many replies, to avoid
the need to re-fetch them.
NFSv3 supports the READDIRPLUS operation, which gets file handles and attributes along with file names when scanning a directory.
NFSv3 supports TCP. Using TCP as a transport made NFS over a WAN more
feasible.
NFSv4
NFSv4 retains all NFSv3 advantages.
NFSv4 supports ACLs.
NFSv4 no longer requires an rpcbind service.
NFSv4 uses the virtual file system to present the server's export.
NFSv4 supports a pseudo file system. The pseudo file system provides maximum flexibility: export pathnames on servers can be changed transparently to clients.
NFSv4 has locking operations as part of the protocol, which keep track of open files and delegations.
NFSv4 works through firewalls and on the Internet.
When mounting a file system via NFS, Red Hat Enterprise Linux uses
NFSv4 by default, if the server supports it.
The mounting and locking protocols have been incorporated into the
NFSv4 protocol. The server also listens on the well-known TCP port 2049. As
such, NFSv4 does not need to interact with rpcbind, lockd, and rpc.statd
daemons. The rpc.mountd daemon is required on the NFS server to set up the
exports.
All NFS versions rely on Remote Procedure Calls (RPC) between clients
and servers. RPC services under Red Hat Enterprise Linux 6 are controlled by
the rpcbind service.
HOW TO CONFIGURE NFS ON RHEL6?
Server - 192.168.234.146
Client - 192.168.234.200
SERVER SIDE CONFIG:
[root@rhel6-server ~]# yum install nfs* -y
[root@rhel6-server ~]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Stopping RPC idmapd: [ OK ]
Starting RPC idmapd: [ OK ]
Starting NFS daemon: [ OK ]
[root@rhel6-server ~]# service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 7607) is running...
nfsd (pid 7672 7671 7670 7669 7668 7667 7666 7665) is running...
rpc.rquotad (pid 7603) is running...
[root@rhel6-server ~]# chkconfig --list nfs
nfs    0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@rhel6-server ~]# chkconfig nfs on
[root@rhel6-server ~]# chkconfig --list nfs
nfs    0:off  1:off  2:on   3:on   4:on   5:on   6:off
[root@rhel6-server ~]# mkdir /nfstest1
[root@rhel6-server ~]# vi /etc/exports
/nfstest1 *(rw,sync,no_root_squash,no_all_squash)
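The server-side sequence above can be collected into one script. This is only a sketch of the steps shown (package install, share directory, exports entry, services); it defaults to a dry run that prints each command instead of executing it, since the real commands need root on a RHEL 6 box:

```shell
#!/bin/sh
# Sketch of the NFS server setup steps above. DRY_RUN=1 (the default)
# only prints each command; set DRY_RUN=0 to actually execute them
# (requires root on RHEL 6).
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run yum install -y nfs-utils
run mkdir -p /nfstest1
run sh -c 'echo "/nfstest1 *(rw,sync,no_root_squash)" >> /etc/exports'
run service nfs start
run chkconfig nfs on
run exportfs -a
```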
To export all shares listed in /etc/exports:
[root@rhel6-server ~]# exportfs -a
[root@rhel6-server ~]# exportfs -v
/nfstest1    <world>(rw,wdelay,no_root_squash,no_subtree_check)
ro: This option stands for Read Only, which means the client has permission only to read data from the share, with no write permission.
rw: This option stands for Read and Write. It allows the client machine both read and write permissions on the directory.
root_squash: Convert incoming requests from user root to the anonymous uid and gid.
no_root_squash: This option needs to be understood very carefully, because it can become a security hole on the server. If the user "root" on the client machine mounts a share from the server, then by default the requests made by the root user are fulfilled as a user called "nobody", instead of root. That is a plus point as far as security is concerned: root on the client machine cannot harm the server, because its requests are fulfilled as nobody, not as root. Using the no_root_squash option disables this feature, and requests are performed as root instead of nobody.
root_squash — Prevents root users connected remotely from
having root privileges and assigns them the user ID for the user nfsnobody.
This effectively "squashes" the power of the remote root user to the
lowest local user, preventing unauthorized alteration of files on the remote
server. Alternatively, the no_root_squash option turns off root squashing. To
squash every remote user, including root, use the all_squash option. To specify
the user and group IDs to use with remote users from a particular host, use the
anonuid and anongid options, respectively. In this case, a special user account can be created for remote NFS users to share, specified with (anonuid=<uid>,anongid=<gid>), where <uid> is the user ID number and <gid> is the group ID number.
If you use root_squash, root's account is mapped to "nobody", which usually has few permissions to read or write anything.
Read "man exports" on the server for full details.
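The squash options above boil down to one question per export: does root stay root, does everyone get squashed, or only root (the default)? A toy classifier, purely for illustration (the option names are real exports(5) options; the function itself is just a teaching aid, not part of any NFS tool):

```shell
#!/bin/sh
# Classify how one exports option string maps incoming users.
# The option names are real exports(5) options; this function is
# only a teaching aid, not part of any NFS tool.
root_mapping() {
    case ",$1," in
        *,all_squash,*)     echo "every user mapped to anonymous uid/gid" ;;
        *,no_root_squash,*) echo "root stays root" ;;
        *)                  echo "root mapped to anonymous uid/gid (default root_squash)" ;;
    esac
}

root_mapping "rw,sync,no_root_squash"   # -> root stays root
root_mapping "rw,sync"                  # -> root mapped to anonymous uid/gid (default root_squash)
root_mapping "rw,all_squash"            # -> every user mapped to anonymous uid/gid
```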
async: As I have mentioned in the VFS (Virtual File System) section, each and every request to the NFS server from the client is first converted to an RPC call and then submitted to the VFS on the server. The VFS hands the request to the underlying file system to complete.
Now if you use the async option, as soon as the request is handed over to the underlying file system to fulfill, the NFS server replies to the client saying the request has completed. The NFS server does not wait for the write operation to finish on the underlying physical medium before replying to the client.
Although this makes the operation a little faster, it can cause data corruption. You could say the NFS server is telling a lie to the client that the data is written to disk. (What happens if the server gets rebooted at this moment? There is no trace of the data.)
sync: The sync option does the reverse. In this case the NFS server replies to the client only after the data is completely written to the underlying medium. This results in a slight performance lag.
subtree_check: Verify that the requested file is in the exported tree.
This is the default. Every file request is checked to make sure that
the requested file is in an exported subdirectory. If this option is turned
off, the only verification is that the file is in an exported filesystem.
no_subtree_check: Negation of subtree_check.
Occasionally, subtree checking can cause problems when a requested file is renamed while the client has the file open. If many such situations are anticipated, it might be better to set no_subtree_check. One such situation is the export of the /home filesystem. Most other situations are best handled with subtree_check.
no_wdelay: Write to disk as soon as possible.
NFS has an optimization algorithm that delays disk writes if NFS
deduces a likelihood of a related write request soon arriving. This saves disk
writes and can speed performance.
BUT...
If NFS deduces wrongly, this behavior delays every request, in which case the delay should be eliminated. That's what the no_wdelay option does: it eliminates the delay. In general, no_wdelay is recommended when most NFS requests are small and unrelated.
wdelay: Negation of no_wdelay. This is the default.
[root@rhel6-server ~]# exportfs -avf
exporting *:/nfstest1
[root@rhel6-server ~]# service nfs restart
Shutting down NFS daemon: [ OK ]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Stopping RPC idmapd: [ OK ]
Starting RPC idmapd: [ OK ]
Starting NFS daemon: [ OK ]
[root@rhel6-server ~]# cat /var/lib/nfs/etab
/nfstest1    *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534)
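Comparing the etab line with what was actually written in /etc/exports shows which options the kernel filled in as defaults. A small sketch of that comparison (illustrative only; real etab lines carry more options than the shortened strings used here):

```shell
#!/bin/sh
# List options that appear in the etab entry but were not written
# explicitly in /etc/exports, i.e. the defaults the server filled in.
# Purely illustrative; real etab lines carry more options than shown.
implicit_options() {
    explicit=",$1,"
    echo "$2" | tr ',' '\n' | while read -r opt; do
        case "$explicit" in
            *",$opt,"*) ;;          # written explicitly in /etc/exports
            *) echo "$opt" ;;       # filled in by default
        esac
    done
}

implicit_options "rw,sync,no_root_squash" "rw,sync,wdelay,no_root_squash,no_subtree_check"
# -> wdelay
#    no_subtree_check
```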
AT CLIENT:
[root@rhel6-test1 ~]# mkdir /nfstest1
[root@rhel6-test1 ~]# mount -t nfs 192.168.234.146:/nfstest1 /nfstest1
mount: wrong fs type, bad option, bad superblock on 192.168.234.146:/nfstest1,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so
[root@rhel6-test1 ~]# service nfs restart
nfs: unrecognized service
[root@rhel6-test1 ~]# yum install nfs* -y
[root@rhel6-test1 ~]# service nfs restart
Shutting down NFS daemon: [FAILED]
Shutting down NFS mountd: [FAILED]
Shutting down NFS quotas: [FAILED]
Starting NFS services: [ OK ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp). [FAILED]
Starting NFS mountd: [FAILED]
Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
rpc.nfsd: unable to set any sockets for nfsd [FAILED]
[root@rhel6-test1 ~]# showmount -e 192.168.234.146
Export list for 192.168.234.146:
/nfstest1 *
[root@rhel6-test1 ~]# mount -t nfs 192.168.234.146:/nfstest1 /nfstest1
[root@rhel6-test1 ~]# df -kh
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sda3                   16G  6.3G  8.6G  43% /
tmpfs                      370M   68K  370M   1% /dev/shm
/dev/sda1                  291M   37M  240M  14% /boot
/dev/sdb1                  2.0G   85M  1.8G   5% /home
.host:/                    108G   99G  8.5G  93% /mnt/hgfs
192.168.234.146:/nfstest1   16G  2.9G   12G  20% /nfstest1
[root@rhel6-test1 ~]# cd /nfstest1
[root@rhel6-test1 nfstest1]# touch f1
[root@rhel6-test1 nfstest1]# ls -l
total 0
-rw-r--r--. 1 root root 0 May 22 17:20 f1
[root@rhel6-test1 nfstest1]# vi /etc/fstab
192.168.234.146:/nfstest1  /nfstest1  nfs  defaults  1 1
It is recommended to add the "_netdev" option along with "defaults" in /etc/fstab.
Why…??
During Linux boot, when the init process tries to mount the file systems listed in /etc/fstab, the NFS mount points in that file cannot be mounted, because the network service has not started yet. Hence Red Hat recommends adding the _netdev option for NFS file systems in /etc/fstab, so that they are mounted only after the networking service has started.
_netdev
The filesystem resides on a device that requires network access (used
to prevent the system from attempting to mount these filesystems until the
network has been enabled on the system).
The _netdev option doesn’t tell the system to mount the filesystem
when network comes up, it says don’t attempt to mount it at all if the network
isn’t up.
[root@rhel6-test1 ~]# vi /etc/fstab
192.168.234.146:/nfstest1  /nfstest1  nfs  defaults,_netdev  1 1
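A quick way to spot NFS lines that are missing _netdev is to scan the fstab fields. A sketch (it reads fstab-formatted text from stdin so it can be tried on any file; the sample line is the entry from this example):

```shell
#!/bin/sh
# Warn about NFS entries (fstab field 3 == "nfs") whose option field
# (field 4) lacks _netdev. Reads fstab-formatted text from stdin.
check_netdev() {
    awk '$3 == "nfs" && $4 !~ /_netdev/ { print "missing _netdev: " $1 }'
}

printf '%s\n' '192.168.234.146:/nfstest1 /nfstest1 nfs defaults 1 1' | check_netdev
# -> missing _netdev: 192.168.234.146:/nfstest1
```

On a real system you would run it as `check_netdev < /etc/fstab`.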
If you still face problems mounting NFS file systems during the boot process, add "NETWORKDELAY=60" to the file below.
[root@rhel6-test1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rhel6-test1
NETWORKDELAY=60
One more "culprit" can ruin our NFS show: "netfs".
[root@rhel6-test1 ~]# cd /etc/rc.d/rc3.d
[root@rhel6-test1 rc3.d]# ls -ltr *netfs*
lrwxrwxrwx. 1 root root 15 Aug 30  2016 S25netfs -> ../init.d/netfs
Description: Mounts and unmounts all Network File System (NFS), CIFS, and NCP mount points.
[root@rhel6-test1 ~]# chkconfig netfs on
[root@rhel6-test1 ~]# service netfs status
Configured NFS mountpoints:
/nfstest1
Active NFS mountpoints:
/nfstest1
SOME MORE WITH NFS:
How to know the NFS version used by system?
[root@rhel6-test1 ~]# mount -v | grep /nfstest1
192.168.234.146:/nfstest1 on /nfstest1 type nfs (rw,vers=4,addr=192.168.234.146,clientaddr=192.168.234.200)
[root@rhel6-test1 ~]# nfsstat -m
/nfstest1 from 192.168.234.146:/nfstest1/
Flags: rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.234.200,minorversion=0,local_lock=none,addr=192.168.234.146
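The vers= field can also be pulled out of that output mechanically, which is handy in scripts. A sketch that parses an nfsstat -m style flags string (the sample input mirrors the output above):

```shell
#!/bin/sh
# Extract the vers= value from an nfsstat -m style flags string
# (comma-separated mount options read from stdin).
nfs_version() {
    tr ',' '\n' | sed -n 's/^vers=//p'
}

printf '%s\n' 'rw,relatime,vers=4,rsize=131072,wsize=131072' | nfs_version
# -> 4
```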
How to show the file systems exported by the NFS server, from the client?
[root@rhel6-test1 ~]# showmount -e 192.168.234.146
Export list for 192.168.234.146:
/nfstest1 *
How to show the exported file systems on the NFS server itself?
[root@rhel6-server ~]# showmount -e 0
Export list for 0:
/nfstest1 *
“OR”
[root@rhel6-server ~]# showmount -e localhost
Export list for localhost:
/nfstest1 *
How to remount an NFS share?
[root@rhel6-test1 ~]# mount -o remount /nfstest1
If the entry is present in "/etc/fstab", then:
[root@rhel6-test1 ~]# mount -a
How to remount an NFS share as a read-only file system?
[root@rhel6-test1 ~]# mount -o remount,ro /nfstest1/
[root@rhel6-test1 ~]# cd /nfstest1/
[root@rhel6-test1 nfstest1]# touch f3
touch: cannot touch `f3': Read-only file system
[root@rhel6-test1 nfstest1]# cd
[root@rhel6-test1 ~]# mount -o remount,rw /nfstest1/
[root@rhel6-test1 ~]# cd /nfstest1/
[root@rhel6-test1 nfstest1]# touch f3
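The touch test above can be wrapped into a tiny probe, useful when scripting checks against a remounted share. A sketch (the probe file name is arbitrary, not part of any NFS convention):

```shell
#!/bin/sh
# Report whether a directory is currently writable by creating and
# removing a throwaway probe file (the file name is arbitrary).
writable() {
    if touch "$1/.nfs_probe.$$" 2>/dev/null; then
        rm -f "$1/.nfs_probe.$$"
        echo "writable"
    else
        echo "read-only"
    fi
}

writable /tmp
# -> writable
```

On the share remounted read-only above, `writable /nfstest1` would print "read-only".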
How to check from the client which NFS versions are running on the NFS server?
[root@rhel6-test1 ~]# rpcinfo -p 192.168.234.146
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  50970  status
    100024    1   tcp  52213  status
    100011    1   udp    875  rquotad
    100011    2   udp    875  rquotad
    100011    1   tcp    875  rquotad
    100011    2   tcp    875  rquotad
    100005    1   udp  37492  mountd
    100005    1   tcp  42212  mountd
    100005    2   udp  36634  mountd
    100005    2   tcp  46191  mountd
    100005    3   udp  53602  mountd
    100005    3   tcp  48543  mountd
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  58489  nlockmgr
    100021    3   udp  58489  nlockmgr
    100021    4   udp  58489  nlockmgr
    100021    1   tcp  51404  nlockmgr
    100021    3   tcp  51404  nlockmgr
    100021    4   tcp  51404  nlockmgr
Check the version against listed services.
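Reading the versions out of that long listing by eye is error-prone; an awk one-liner can condense it into one "service: versions" line per service. A sketch, fed here with a subset of the listing above (real rpcinfo -p output repeats versions per protocol, so duplicates can appear):

```shell
#!/bin/sh
# Condense "rpcinfo -p" style output into one "service: versions" line
# per service. Skips the header line; tcp/udp duplicates are kept.
versions_by_service() {
    awk 'NR > 1 { v[$5] = v[$5] ? v[$5] "," $2 : $2 }
         END { for (s in v) print s ": " v[s] }'
}

versions_by_service <<'EOF'
   program vers proto   port  service
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
EOF
# -> nfs: 2,3,4
```

Against a live server you would pipe into it: `rpcinfo -p 192.168.234.146 | versions_by_service`.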
How to list the file systems mounted via NFS?
[root@rhel6-test1 ~]# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
192.168.234.146:/nfstest1 on /nfstest1 type nfs (rw)
How to mount an NFS share over UDP?
[root@rhel6-test1 ~]# mount -o udp 192.168.234.146:/nfstest1 /nfstest1
[root@rhel6-test1 ~]# nfsstat -m
/nfstest1 from 192.168.234.146:/nfstest1/
Flags: rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.234.200,minorversion=0,local_lock=none,addr=192.168.234.146
/nfstest1 from 192.168.234.146:/nfstest1/
Flags: rw,relatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=udp,port=0,timeo=11,retrans=3,sec=sys,clientaddr=192.168.234.200,minorversion=0,local_lock=none,addr=192.168.234.146
This option can improve read/write performance, but it sacrifices fault tolerance because of the connectionless behavior of UDP.
How to check the NFS server status remotely?
Check from the client whether the server's nfsd processes are responding:
[root@rhel6-test1 ~]# rpcinfo -u 192.168.234.146 nfs
rpcinfo: RPC: Program not registered
program 100003 is not available
Check from the client whether the server's mountd processes are responding:
[root@rhel6-test1 ~]# rpcinfo -u 192.168.234.146 mountd
rpcinfo: RPC: Program not registered
program 100005 is not available
It seems there is some issue.
Check from the client whether the NFS services have started on the NFS server:
[root@rhel6-test1 ~]# rpcinfo -s 192.168.234.146 | egrep 'nfs|mountd'
Nothing in output…
Check the status at NFS server,
[root@rhel6-server ~]# service nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
Ohhhh! Now restart the NFS service on the server and recheck the above commands from the client.
[root@rhel6-server ~]# service nfs restart
Shutting down NFS daemon: [FAILED]
Shutting down NFS mountd: [FAILED]
Shutting down NFS quotas: [FAILED]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Stopping RPC idmapd: [ OK ]
Starting RPC idmapd: [ OK ]
Starting NFS daemon: [ OK ]
[root@rhel6-server ~]# service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 13439) is running...
nfsd (pid 13504 13503 13502 13501 13500 13499 13498 13497) is running...
rpc.rquotad (pid 13435) is running...
Check again from the client whether the NFS services have started on the NFS server:
[root@rhel6-test1 ~]# rpcinfo -s 192.168.234.146 | egrep 'nfs|mountd'
100005  3,2,1  tcp6,udp6,tcp,udp  mountd   superuser
100003  4,3,2  udp6,tcp6,udp,tcp  nfs      superuser
100227  3,2    udp6,tcp6,udp,tcp  nfs_acl  superuser
Check from the client whether the server's nfsd processes are responding:
[root@rhel6-test1 ~]# rpcinfo -u 192.168.234.146 nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
program 100003 version 4 ready and waiting
Check from the client whether the server's mountd processes are responding:
[root@rhel6-test1 ~]# rpcinfo -u 192.168.234.146 mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
How to check the NFS server status from NFS server side?
[root@rhel6-server ~]# service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 13439) is running...
nfsd (pid 13504 13503 13502 13501 13500 13499 13498 13497) is running...
rpc.rquotad (pid 13435) is running...
[root@rhel6-server ~]# rpcinfo -u localhost rpcbind
program 100000 version 2 ready and waiting
program 100000 version 3 ready and waiting
program 100000 version 4 ready and waiting
[root@rhel6-server ~]# rpcinfo -u localhost mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
[root@rhel6-server ~]# rpcinfo -u localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
program 100003 version 4 ready and waiting
How to know which host is providing the NFS share?
[root@rhel6-test1 ~]# nfsstat -m
/nfstest1 from 192.168.234.146:/nfstest1/
Flags: rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.234.200,minorversion=0,local_lock=none,addr=192.168.234.146
/nfstest1 from 192.168.234.146:/nfstest1/
Flags: rw,relatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=udp,port=0,timeo=11,retrans=3,sec=sys,clientaddr=192.168.234.200,minorversion=0,local_lock=none,addr=192.168.234.146
Why is the same info shown twice? Look carefully: I mounted it again over UDP (check the "proto" field).
How to mount an NFS share with NFS version 2 or 3?
[root@rhel6-test1 ~]# mount -t nfs -o ro,vers=2 192.168.234.146:/nfstest1 /nfstest1
[root@rhel6-test1 ~]# nfsstat -m
/nfstest1 from 192.168.234.146:/nfstest1
Flags: ro,relatime,vers=2,rsize=8192,wsize=8192,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.234.146,mountvers=1,mountport=50180,mountproto=udp,local_lock=none,addr=192.168.234.146
[root@rhel6-test1 ~]# mount
192.168.234.146:/nfstest1 on /nfstest1 type nfs (ro,vers=2,addr=192.168.234.146)
=================O/P Removed=============================
How to diagnose the mount steps?
[root@rhel6-test1 ~]# mount -vvv 192.168.234.146:/nfstest1 /nfstest1
mount: fstab path: "/etc/fstab"
mount: mtab path:  "/etc/mtab"
mount: lock path:  "/etc/mtab~"
mount: temp path:  "/etc/mtab.tmp"
mount: UID:        0
mount: eUID:       0
mount: no type was given - I'll assume nfs because of the colon
mount: spec:  "192.168.234.146:/nfstest1"
mount: node:  "/nfstest1"
mount: types: "nfs"
mount: opts:  "(null)"
final mount options: '(null)'
mount: external mount: argv[0] = "/sbin/mount.nfs"
mount: external mount: argv[1] = "192.168.234.146:/nfstest1"
mount: external mount: argv[2] = "/nfstest1"
mount: external mount: argv[3] = "-v"
mount: external mount: argv[4] = "-o"
mount: external mount: argv[5] = "rw"
mount.nfs: timeout set for Tue May 23 19:14:12 2017
mount.nfs: trying text-based options 'vers=4,addr=192.168.234.146,clientaddr=192.168.234.200'
192.168.234.146:/nfstest1 on /nfstest1 type nfs (rw)
How to display the list of shared file systems and their options on the NFS server?
[root@rhel6-server ~]# exportfs -v
/testnfs     <world>(rw,wdelay,root_squash,no_subtree_check)
/nfstest1    <world>(rw,wdelay,no_root_squash,no_subtree_check)
How to unexport shares?
[root@rhel6-server ~]# exportfs -ua
[root@rhel6-server ~]# showmount -e 0
Export list for 0:
How to export NFS shares?
[root@rhel6-server ~]# exportfs -a
[root@rhel6-server ~]# showmount -e 0
Export list for 0:
/nfstest1 *
/testnfs *
[root@rhel6-server ~]# exportfs -v
/testnfs     <world>(rw,wdelay,root_squash,no_subtree_check)
/nfstest1    <world>(rw,wdelay,no_root_squash,no_subtree_check)