NFS3 vs NFS4 performance
FedoraForum.org - Fedora Support Forums and Community
  1. #1
    stevea Guest

    NFS3 vs NFS4 performance

    I happened to be transferring some files from an NFS server and decided to spend a pleasant half hour testing the speed of NFSv3 vs NFSv4. The server and client are the same and the files transferred are the same. 1Gbit network with jumbo packets. The server can only pull data through the file system at about 280Mbit/sec or a bit better.

    The numbers varied quite a bit.

    For NFSv4 the top rate was 315Mbit/sec. The worst case was 138Mbit/sec.
    For NFSv3 the top rate was 239Mbit/sec and the worst was 115Mbit/sec.

    In each individual case NFSv4 was faster, typically by ~20%, tho' with a few anomalies where it was much faster. The top figures occurred when it was likely that the server held much of the file data in memory buffers. The low figures are close to the typical value for unbuffered files on the server.
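    For reference, the conversion behind figures like these is just megabytes times 8 divided by seconds. A quick sketch (the size and time below are made-up examples, not the measurements above):

    ```shell
    # Convert an observed transfer (MB copied, wall-clock seconds) to Mbit/sec.
    # 700 MB in 21.6 s is an illustrative example, not a number from this post.
    mb=700
    secs=21.6
    awk -v mb="$mb" -v s="$secs" 'BEGIN { printf "%.0f Mbit/sec\n", mb*8/s }'
    ```

    which prints 259 Mbit/sec for that example.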
    ==

    Not only is the NFSv4 tcp protocol cleaner and more featureful, it's also faster.

    -S

  2. #2
    Join Date
    Aug 2006
    Location
    /dev/realm/{Abba,Carpenters,...stage}
    Posts
    3,285
    Thanks for the info. It seems to me that we cannot choose the nfs version on Fedora, but it seems to be V4 from what I see
    Code:
    rpm -qai|grep nfs
    URL : http://nfsv4.bullopensource.org
    Name : system-config-nfs Relocations: (not relocatable)
    Group : System Environment/Base Source RPM: system-config-nfs-1.3.41-1.fc10.src.rpm
    URL : http://fedoraproject.org/wiki/SystemConfig/nfs
    system-config-nfs is a graphical user interface for creating,
    modifying, and deleting nfs shares.
    URL : http://nfsv4.bullopensource.org/
    URL : http://www.citi.umich.edu/projects/nfsv4/linux/
    Name : nfs4-acl-tools Relocations: (not relocatable)
    Group : System Environment/Tools Source RPM: nfs4-acl-tools-0.3.2-2.fc9.src.rpm
    URL : http://www.citi.umich.edu/projects/nfsv4/linux/
    Summary : The nfs4 ACL tools
    Name : nfs-utils Relocations: (not relocatable)
    Group : System Environment/Daemons Source RPM: nfs-utils-1.1.4-8.fc10.src.rpm
    URL : http://sourceforge.net/projects/nfs
    The nfs-utils package provides a daemon for the kernel NFS server and
    This package also contains the mount.nfs and umount.nfs program.
    Name : nfs-utils-lib Relocations: (not relocatable)
    Group : System Environment/Libraries Source RPM: nfs-utils-lib-1.1.4-1.fc10.src.rpm
    URL : http://www.citi.umich.edu/projects/nfsv4/linux/

  3. #3
    Join Date
    Oct 2004
    Location
    London, UK
    Posts
    4,991
    I would flippin hope so (that nfs4 > nfs3)

    Have you compared to ftp transfer speed? (and, for a laugh, ssh)

  4. #4
    Join Date
    Sep 2004
    Posts
    2,006
    Quote Originally Posted by sideways
    I would flippin hope so (that nfs4 > nfs3)

    Have you compared to ftp transfer speed? (and, for a laugh, ssh)
    actually its not funny - ssh is actually as fast as nfs4 on fedora/centos - there's something very fscked up with redhat's nfs implementation, my mac handles nfs much better.

    i find nfs3 to be pretty unstable (at least when used with nautilus or autofs) on fedora, nfs4 is better but far from perfect.

    ftp of course is miles faster than ssh or nfs, being the lowest overhead of all protocols.

    have a look at the benchmarks i got previously here: http://forums.fedoraforum.org/showpo...46&postcount=5 with ftp being 3x faster than nfs4/ssh, 8x faster than nfs3.....
    Last edited by sej7278; 11th April 2009 at 05:17 PM.

  5. #5
    stevea Guest
    Quote Originally Posted by Nokia
    Thanks for the info. It seems to me that we cannot choose the nfs version on Fedora, but it seems to be V4 from what I see
    Not true. The default is V3 and the GUI tool doesn't have any V4 config options. The same Linux server may respond to NFSv4 or NFSv3 clients for the same share depending on the /etc/exports config.

    The server needs something like this in the server's /etc/exports .....
    /home/boy *(rw,insecure,sync,mp=/home,no_root_squash,fsid=0,no_subtree_check)
    The "fsid=0" means this export is the "root" of the NFSv4 pseudo-filesystem for this server; the other options shown apply to both protocol versions.

    If a client mounts the share above like this ...
    mount -t nfs serversys:/home/boy /some/mountpt
    you get NFSv3 protocol

    If you mount like this ...
    mount -t nfs4 serversys:/ /some/mountpt2
    you get NFSv4 protocol. ** Note that you do not mention the served path "/home/boy" for an NFSv4 mount - you use "/" for the fsid=0 root share. You can use uuids or fsids as determined by the server if you really need multiple shares from one server. The suggested method is to add these as subtrees and ...

    The V3 and V4 protocols are very different - a subset of the abstract commands are the same, but even the command transport is very different.
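    To make the two mount forms concrete (the hostname and mount points are examples; run as root):

    ```shell
    # NFSv3: name the full exported path on the server.
    mount -t nfs serversys:/home/boy /mnt/v3

    # NFSv4: name the path relative to the fsid=0 pseudo-root, so "/" here.
    mount -t nfs4 serversys:/ /mnt/v4

    # Check what was actually negotiated ("nfs" vs "nfs4", plus rsize/wsize).
    grep '/mnt/v' /proc/mounts
    ```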
    ---

    I'm a little puzzled by your French links .... The citi.umich.edu project has provided all the model code (and some of the actual code) for both the BSD and Linux implementations:
    http://www.citi.umich.edu/projects/nfsv4. Yes, the Linux kernel NFS server and some of the supporting userspace processes ... gssd, svcgssd, idmapd ... have part of the code w/ CITI copyright. Linux and BSD followed very different paths in dividing the NFS work between kernel and userspace.

    There is an NFSv4.1 in draft and 4.2 in progress, but IMO it's long past time we all switched to a new networked file system design. Andrew, anyone?
    Last edited by stevea; 12th April 2009 at 12:32 AM.

  6. #6
    stevea Guest
    Quote Originally Posted by sideways
    I would flippin hope so (that nfs4 > nfs3)

    Have you compared to ftp transfer speed? (and, for a laugh, ssh)

    Actually NFSv4 has a huge number of additional features. It uses TCP only (port 2049), so you can make a single pinhole in the server firewall, unlike NFSv3. It supports ACLs (RH back-hacked a different form of ACLs into NFSv3). There are several cross-system ID mapping schemes. The basic (same uid, same gid) mapping used by NFSv3 exists. Then there is the "same name, same domain" mapping implemented by the idmapd daemon on each end. You can also use in-band GSSAPI (like kerberos, lipkey or spkm3), so you can for example kerberize your login authentication and then use LDAP w/ kerberos for the NFSv4 server-side mapping. And of course kerberos etc. can be used to authenticate/verify/encrypt the traffic (3 options). This is all built into the protocol.

    So it's a surprise IMO that V4 is faster .. I expected slower with the added protocol layers.

    ===
    scp transferred at 162-190 Mbit/sec in two tries (same server/client as above). That's very fast considering the encrypt/decrypt overhead. But of course scp doesn't give you access to a file system - it just copies files.

    sshfs file system - did surprisingly well, 155-156 Mbit/sec. It does provide encryption (which is IMO an excessive cost on my soho LAN), but sshfs is severely simplistic when it comes to mapping IDs or letting multiple accounts on the client access the served files. The sshfs mount point inode is only accessible by the mounting account.

    # in this case root mounted an sshfs at /tmp/mp. Note that the stevea account belongs to the "everyone" group.

    [root@lycoperdon Desktop]# ls -ld /tmp/mp
    drwxrwxr-x 1 root everyone 4096 2009-04-10 21:49 /tmp/mp

    [root@lycoperdon Desktop]# su stevea
    [stevea@lycoperdon Desktop]$ ls -ld /tmp/mp
    ls: cannot access /tmp/mp: Permission denied
    [stevea@lycoperdon Desktop]$ id
    uid=520(stevea) gid=520(stevea) groups=520(stevea),599(everyone) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
    --

    If stevea mounts the sshfs, then root gets permission denied !

    So it's a one-user mount.

    [edit]

    I won't bother w/ ftp - again it's a copy program (and there is a fuse file system scheme), but it's likely to have no advantage and the same inadequate mapping scheme (one-user mount).
    Last edited by stevea; 12th April 2009 at 12:34 AM.

  7. #7
    stevea Guest
    Quote Originally Posted by sej7278
    actually its not funny - ssh is actually as fast as nfs4 on fedora/centos - there's something very fscked up with redhat's nfs implementation, my mac handles nfs much better.

    i find nfs3 to be pretty unstable (at least when used with nautilus or autofs) on fedora, nfs4 is better but far from perfect.

    ftp of course is miles faster than ssh or nfs, being the lowest overhead of all protocols.

    have a look at the benchmarks i got previously here: http://forums.fedoraforum.org/showpo...46&postcount=5 with ftp being 3x faster than nfs4/ssh, 8x faster than nfs3.....

    Very interesting insights. First, it's not totally surprising that NFS is slower than a copy program, since ssh isn't dealing with the file metadata at all and doesn't have to understand file synchronization and locking - a complex issue on the server side. My expectation was that encrypt/decrypt would take a heavier CPU load and slow the transfer (this was a bad issue back around FC3/4 on laptops). OTOH NFS has layers of heavy protocol.

    If the constantly changing Gnomic interface to NFSv3 is broken it's no surprise. These guys aren't playing with a full deck. I used to have some accounts where the ~/Desktop was a soft link to another local file system (~ was where /etc/passwd said, and the ~ directory had a "ln -s ..." link to the Desktop) and Gnome completely refused to make launchers there or create a trash icon - otherwise it mostly worked. Recently the Gnomes have it in their tiny little heads that they are writing an OS - so now they have created their own file-sys interface ... adding to the list of 85%-functional Gnome stuff. Don't blame Linux NFSv3 - it's most likely Gnome.

    I don't know where the Mac got their implementation of NFS. NFSv4 has only been in very recent FreeBSD releases, a bit experimental and with loose ends, so Apple probably did their own port. Perhaps NetBSD had an earlier implementation. It doesn't surprise me that one could do better than the Linux implementation. The Linux NFSv4 implementation requires a lot of intercommunication between the kernel and the userspace programs. In part that's unavoidable, but alternative implementations could be faster at the cost of bigger kernel size or less immediate access to configuration changes. These mostly communicate thru the /sys pseudo-file system, and I can't believe that this method is efficient, nor is it easy to schedule the daemons to run when a record is ready.

    =======
    Looking at the link to your other post - you have a configuration problem. Don't blame NFS - that's pilot error.

    You can ftp at (converted to my units) 308Mbit/sec, which is most certainly limited by your disk rate (a fair bit faster than my old SATAs in my server). When you use NFSv3 over a bonded pair of gigabit links (I use a single) you get ... 45Mbit/sec and a load of error messages. My slowest NFSv3 transfer was 2.5 times faster than this, and I have slower disks and a slower network. You absolutely have a config or driver problem unrelated to NFS per se.

    When you switch to NFSv4 the errors disappear, but you're only getting 103Mbit/sec. My worst case for NFSv4 was 1.3 times faster, using inferior disks and an inferior network (F10 client btw). Nope - sorry, the performance issue is not due to NFSv4 per se.

    Maybe you have a defective enet driver, or more likely a config problem: maybe you're not using jumbo packets, or have a size mismatch that causes extra work. Maybe your CPUs are overloaded with an encryption you chose and aren't up to the task. Maybe your NFSv4 config is just plain wrong and something is getting done the hard way or via retries.

    As I showed, my V4 transfers were radically faster when the files were cached. W/ your faster server I/O you should see about ~150Mbit+/sec worst-case rate from your system, maybe higher.

    The Mac OSX speed you cite is 240Mbit/sec. I showed that in a case where the served files were partly cached on the server, Linux NFSv4 on my lesser hardware can get 315Mbit/sec. Without knowing more, this is not proof that Mac OSX NFS is faster than Linux NFSv4. We'd need much better test conditions than this. It's entirely possible that the Mac is faster, but that post is not evidence, since you clearly have some problem on your Linux system.

    --
    FWIW I've been working heavily w/ NFS (kernel implementation) for a good bit of the past year. If you can post more about your config perhaps I can help.

    What are the CPU & memory situation and other loading like?
    What are the network interfaces & drivers? I think bonding is just an additional headache, since your disk I/O is much slower than a single gigabit. Best to take a single pair of interfaces, set up the same jumbo packet size, and make certain that the jumbos get thru - both ways - w/o fracture in a switch or something. You'll need wireshark (both ends) for that. ttcp is a good test of tcp speed and also lets you tweak the wire characteristics.
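    One quick way to prove the jumbos survive the path before reaching for wireshark (interface name and sizes are examples):

    ```shell
    # Set the interface MTU, then ping with fragmentation forbidden (-M do).
    # If a switch or NIC can't pass the frame whole, the ping fails loudly
    # instead of being silently fragmented.
    ip link set dev eth0 mtu 7000
    ping -c 3 -M do -s 6972 serversys   # 6972 = 7000 minus 28 bytes of IP+ICMP headers
    ```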

    Then it would be smart to diagnose where all those NFSv3 errors were coming from. Something was certainly wrong there, and you may just be masking a problem by switching to V4.

    You'll need to pull out Wireshark and examine the traffic, especially the turnaround time on requests/replies. Something is obviously still wrong with your system/config.

    Is the same server serving the Mac & Linux clients? If so, we can mostly exonerate it.
    What do the client and server configs look like, including the Mac?

    -S
    Last edited by stevea; 12th April 2009 at 12:07 AM.

  8. #8
    Join Date
    Oct 2004
    Location
    London, UK
    Posts
    4,991
    as usual, superb info stevea

  9. #9
    Join Date
    Sep 2004
    Posts
    2,006
    sorry to dig out an old thread but i'm still having speed issues and hadn't previously noticed that the thread had been responded to.

    i'll try to answer some of steva's questions (note i'm running centos 5.3 on server now):

    the mac is accessing the same server (note the mac is using nfs3 to connect to an nfs4 server, which is possible with limitations);

    the server has a realtek 8169 gigabit pci card, the linux client has a realtek 8111 onboard gigabit card (both use the r8169 driver), the mac mini has some brand of onboard gigabit (who knows with apple - maybe intel 1000?);

    the server has the slowest (2.4ghz athlonxp) cpu of all - the mac is a 1.8ghz core2duo, the linux a 3.2ghz core2quad, none of them really see much cpu usage when doing transfers, certainly no more than 25% cpu;

    could JFS on the server be the problem (although i think i've tested with the ext3 boot drive too);

    i've given up with nic bonding, and can't use jumbo frames as the realteks don't seem to work at all with it enabled (switch supports it) and they only support 7k frames;

    i'm not using encryption on nfs, and ssh worked fine;

    on the client, in /etc/fstab i have:

    Code:
    server:/data0  /media/data0  nfs4  noauto,rw,user,hard,intr,timeo=600,rsize=32768,wsize=32768  0 0
    on the server in /etc/exports i have:

    Code:
    /export    192.168.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    /export/data0    192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)
    i still get nautilus greying out when i write large files (not read iirc) and i suspect its something to do with async as the progress bar goes really fast, then just stops when doing the last few megabytes - unless its JFS being slow to write to?

    to update the benchmarks, i just copied some iso/avi files from client to server (i.e. writing to the nfs server) via the console not nautilus rubbish:

    SCP 350mb
    real 0m33.323s
    real 0m21.645s

    SCP 700mb
    real 0m44.360s

    SCP 3.2gb
    real 3m13.181s

    NFSv4 350mb
    real 0m25.882s
    real 0m12.656s
    real 0m19.647s

    NFSv4 700mb
    real 0m42.595s
    real 0m49.849s
    real 0m34.633s
    real 0m39.150s

    NFSv4 1.4gb
    real 1m54.000s

    NFSv4 2.3gb
    real 1m50.307s

    NFSv4 3.2gb
    real 4m23.481s

    NFSv4 4x700mb
    real 4m23.987s

    which is odd as that's saying that nfs4 is slightly faster than scp for smaller (<1gb) files only, but quite a bit slower for larger files - also the 2nd time i write the same file over nfs4 it seems to be faster, is it caching somehow (i'm deleting first, so its not overwriting)?
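    Converting the posted times into rates (MB x 8 / seconds, treating the sizes as decimal megabytes, so the absolute values are rough) makes the large-file falloff visible:

    ```shell
    # Approximate Mbit/sec for three of the NFSv4 writes quoted above
    # (1m54.000s = 114.000s, 4m23.481s = 263.481s).
    for entry in "700mb 700 42.595" "1.4gb 1400 114.000" "3.2gb 3200 263.481"; do
        set -- $entry
        awk -v n="$1" -v mb="$2" -v s="$3" \
            'BEGIN { printf "nfs4 %-6s %3.0f Mbit/sec\n", n, mb*8/s }'
    done
    ```

    That comes out to roughly 131 Mbit/sec for the 700mb file, dropping to under 100 Mbit/sec past 1gb, which matches the pattern described.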

    is it worth trying the noatime option on the client?

    running wireshark on the client i'm seeing a few of these in red (.6 is the server, .3 is the client):

    Code:
    717348	106.617317	192.168.0.6	192.168.0.3	TCP	[TCP ACKed lost segment] [TCP Previous segment lost] nfs > apex-edge [ACK] Seq=8177745 Ack=845286573 Win=64128 Len=0 TSV=2116210 TSER=398913346
    717349	106.617328	192.168.0.3	192.168.0.6	TCP	[TCP Previous segment lost] [TCP segment of a reassembled PDU]
    EDIT: from what i've read those tcp lost segment bits from wireshark are not a problem, so its not a network problem as far as i can see.....

    netstat -plntu on the server is showing these nfs-related services (no firewall):

    Code:
    tcp        0      0 0.0.0.0:711                 0.0.0.0:*                   LISTEN      2225/rpc.statd
    tcp        0      0 0.0.0.0:908                 0.0.0.0:*                   LISTEN      2425/rpc.mountd
    tcp        0      0 0.0.0.0:879                 0.0.0.0:*                   LISTEN      2395/rpc.rquotad
    tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      2200/portmap
    tcp        0      0 0.0.0.0:35100               0.0.0.0:*                   LISTEN      -         
    udp        0      0 0.0.0.0:2049                0.0.0.0:*                               -       
    udp        0      0 0.0.0.0:905                 0.0.0.0:*                               2425/rpc.mountd
    udp        0      0 0.0.0.0:57875               0.0.0.0:*                               -     
    udp        0      0 0.0.0.0:705                 0.0.0.0:*                               2225/rpc.statd
    udp        0      0 0.0.0.0:708                 0.0.0.0:*                               2225/rpc.statd
    udp        0      0 0.0.0.0:876                 0.0.0.0:*                               2395/rpc.rquotad
    udp        0      0 0.0.0.0:111                 0.0.0.0:*                               2200/portmap
    Last edited by sej7278; 13th May 2009 at 09:37 PM.

  10. #10
    Join Date
    Sep 2004
    Posts
    2,006
    i added the noatime option on the clients and that sped things up a bit, but nautilus still goes compiz-grey/gray when writing files over nfs v3/4 if the mountpoint is in /media, but not /mnt.

    samba/cifs doesn't suffer from the grey issue in either case; and takes 14s to transfer 700mb (nfs4 ~41s) and 1m20 to transfer 4x700mb (nfs4=4m24) with no speed optimisations, so is 3x the speed of nfs!

    i thought i'd just test read speeds (copying from server to local disk) as i didn't think there was a problem there:

    samba 700mb: 40.381s
    samba 1.4gb: 56.798s

    nfs4 700mb: 37.010s
    nfs4 1.4gb: 53.795s

    it seems nfs4 is slightly faster than cifs, as i expected.

    so reads are about twice as fast as writes, but then the local disks (write) are almost twice as fast as the nfs server's disks (read) so it doesn't really prove anything, so i tried it the other way around - with the server acting as the client (writing to the slow disk) and the client acting as the server (reading from the fast disk):

    nfs3 700mb: 11.954s
    nfs3 1.4gb: 42.408s

    so now i'm totally confused as the 700mb figure is really fast, and the 1.4gb is only slightly faster than the other way around (and its nfs v3 not v4).
    Last edited by sej7278; 13th May 2009 at 10:17 PM.

  11. #11
    stevea Guest
    Sorry I'm just getting back to this thread.


    Quote Originally Posted by sej7278
    sorry to dig out an old thread but i'm still having speed issues and hadn't previously noticed that the thread had been responded to.

    i'll try to answer some of steva's questions (note i'm running centos 5.3 on server now):

    the mac is accessing the same server (note the mac is using nfs3 to connect to an nfs4 server, which is possible with limitations);

    the server has a realtek 8169 gigabit pci card, the linux client has a realtek 8111 onboard gigabit card (both use the r8169 driver), the mac mini has some brand of onboard gigabt (who knows with apple - maybe intel 1000?);

    the server has the slowest (2.4ghz athlonxp) cpu of all - the mac is a 1.8ghz core2duo, the linux a 3.2ghz core2quad, none of them really see much cpu usage when doing transfers, certainly no more than 25% cpu;
    FWIW I've tested with both RHEL 5.1 and 5.2 NFSv4 servers. The NFSv4 implementation changed significantly between 5.1 and 5.2, both from the Linux kernel changes and from the RH patchset. 5.2 performance is modestly improved and the file attribute support (ACLs, idmapping) is functionally improved. I have no idea what has changed with RHEL/CentOS 5.3 NFSv4.

    My test systems have various mixes of Intel E1000s (mobile and non-) and Marvell 88xxx interfaces. There have been problems reported in past Fedora/Linux r8169 drivers. I have no idea if that driver is part of your performance problem, but I would certainly suggest you test that part to see if sustained TCP traffic is fast both in & out. ttcp or nuttcp will tell you about your network limitations.

    I usually serve files off an older 2.66GHz Pentium-4 HT ... so I *think* our servers have comparable CPU performance. Still, the specific performance could be an issue, but the driver design might be more critical.


    could JFS on the server be the problem (although i think i've tested with the ext3 boot drive too);
    To be honest I don't know, but I think it's unlikely. NFS[34] must read/write the file as well as the file metadata - the directory content and the file extended attributes. One way to test file system performance is to use a tool like "tar" locally on the server. Tar the file system to /dev/null and time it for reads, or unpack a tarball from a faster drive or a ramdisk and time it for writes. This should give you a vague idea about file times. Generally JFS seems faster in benchmarks than ext3, but it may be slower on file removal and even on file reads (much faster on reconstruction).

    If the Mac NFS access is fast, and the files not cached, then we can ignore the JFS speed issue.

    i've given up with nic bonding, and can't use jumbo frames as the realteks don't seem to work at all with it enabled (switch supports it) and they only support 7k frames;

    i'm not using encryption on nfs, and ssh worked fine;

    on the client, in /etc/fstab i have:

    Code:
    server:/data0  /media/data0  nfs4  noauto,rw,user,hard,intr,timeo=600,rsize=32768,wsize=32768  0 0
    on the server in /etc/exports i have:

    Code:
    /export    192.168.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    /export/data0    192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)

    First, 7k is a respectable jumbo frame size (regular frames are limited to a 1500-byte payload) and this will make a considerable performance improvement, especially on gigabit networks. So I would urge you to set all client & server interfaces to this same size. You generally want to set all the interfaces on your LAN to the "least upper bound", that is, the smallest of the maximum settings available across all interfaces.
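    On RHEL/Fedora of this vintage you can make the MTU persistent per interface via the ifcfg file (the interface name is an example); do this on every host so they all agree:

    ```shell
    # Append the MTU setting and bounce the network stack to apply it.
    echo 'MTU=7000' >> /etc/sysconfig/network-scripts/ifcfg-eth0
    service network restart
    ```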

    I see nothing wrong with your settings. My client negotiates rsize=65536,wsize=65536 by default for nfsv4. It's useful to look in /proc/mounts to see the real negotiated parameters.


    Now the troubles ....

    i still get nautilus greying out when i write large files (not read iirc) and i suspect its something to do with async as the progress bar goes really fast, then just stops when doing the last few megabytes - unless its JFS being slow to write to?

    to update the benchmarks, i just copied some iso/avi files from client to server (i.e. writing to the nfs server) via the console not nautilus rubbish:
    [....]

    which is odd as that's saying that nfs4 is slightly faster than scp for smaller (<1gb) files only, but quite a bit slower for larger files - also the 2nd time i write a the same file over nfs4 seems to be faster, is it caching somehow (i'm deleting first, so its not overwriting)?

    is it worth trying the noatime option on the client?
    Your result is very odd. It appears that beyond some limit around 1GB your NFSv4 write rate slows dramatically. I'll try to reproduce this case.

    Yes file caching is a big issue in these tests. You can drop the caches after a run with
    sync && echo 3 > /proc/sys/vm/drop_caches
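    A repeatable write-test sketch using that (run the cache drop on both client and server, as root; the file path is an example):

    ```shell
    # Flush dirty pages, drop page/dentry/inode caches, then time one copy
    # so the run isn't just replaying cached data.
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time cp /data/test-700mb.iso /media/data0/
    ```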

    noatime prevents the file access-time attribute from being updated every time the file is accessed. It's probably a decent idea to use noatime for a server's root. It will save a little local transaction for the server, but it's not a big issue, especially for your large single-file transfers.
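    If you want to try it on the client side, it's just one more flag in the fstab line you posted:

    ```shell
    # Client /etc/fstab entry from above, with noatime added.
    server:/data0  /media/data0  nfs4  noauto,rw,user,hard,intr,noatime,rsize=32768,wsize=32768  0 0
    ```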



    running wireshark on the client i'm seeing a few of these in red (.6 is the server, .3 is the client):

    Code:
    717348	106.617317	192.168.0.6	192.168.0.3	TCP	[TCP ACKed lost segment] [TCP Previous segment lost] nfs > apex-edge [ACK] Seq=8177745 Ack=845286573 Win=64128 Len=0 TSV=2116210 TSER=398913346
    717349	106.617328	192.168.0.3	192.168.0.6	TCP	[TCP Previous segment lost] [TCP segment of a reassembled PDU]

    EDIT: from what i've read those tcp lost segment bits from wireshark are not a problem, so its not a network problem as far as i can see.....

    I get a few of these too. Your samba data is interesting as well. Once again it *looks* like there is a major NFS slowdown somewhere between 700M and 1.4G and beyond.


    So overall it appears that something very bad happens to your NFS transfers as the file size gets large. 700MB seems fairly fast, but 1.4G is notably slower and 3.2GB is awful. I'll try to reproduce this.







  12. #12
    Join Date
    Sep 2004
    Posts
    2,006
    thanks for the input.

    as far as nic driver issues go, the server only has an r8169 so i can't really take that out of the equation, only one linux box has an e1000, plus the mac of course.

    the 1gb filesize slowdown does point at some weird nic driver bug - in fact a quick google for "8169 +nfs" shows a few interesting bug reports, including one which mentions setting the block size to 2k instead of 32k to stop the "nfs server not responding" bug, and another that recommends the "nolock" option or not starting nfslock

    as for 7k frames, for some reason if i enable anything bigger than the usual 1500 mtu, nothing works at all - no ssh or ping even, and that's via switch or crossover (bypassing the 100mbit router).

    i think i'm going to convert my pentium4ht into a fileserver with its e1000 and a newer kernel/distro (probably ubuntu 8.04.2 lts) and see how that goes, a few google results point to nfs/r8169 problems in the 2.6.18 kernel that are fixed in 2.6.23

    ttcp reports:
    696.45 Mbit/sec from desktop to nfs server
    556.164 Mbit/sec from nfs server to desktop

    edit: just tried a 65k block size and a 700mb file over nfs4 took 39.408s, so no change there.
    Last edited by sej7278; 14th May 2009 at 04:50 PM.

