sorry to dig up an old thread, but i'm still having speed issues and hadn't previously noticed that the thread had been replied to.
i'll try to answer some of steva's questions (note i'm running centos 5.3 on the server now):
the mac is accessing the same server (note the mac is using nfs3 to connect to an nfs4 server, which is possible with limitations);
the server has a realtek 8169 gigabit pci card, the linux client has a realtek 8111 onboard gigabit card (both use the r8169 driver), and the mac mini has some brand of onboard gigabit (who knows with apple - maybe intel 1000?);
the server has the slowest cpu of the lot (a 2.4ghz athlonxp) - the mac is a 1.8ghz core2duo and the linux box a 3.2ghz core2quad; none of them see much cpu usage during transfers, certainly no more than 25%;
could JFS on the server be the problem? (although i think i've tested with the ext3 boot drive too);
i've given up on nic bonding, and can't use jumbo frames as the realteks don't seem to work at all with them enabled (the switch supports them), and they only support 7k frames anyway;
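in case anyone wants to re-test jumbo frames, this is roughly how i was checking whether large frames actually make it across (eth0 and the 7000-byte mtu are just my setup - adjust as needed):

```shell
# bump the mtu on the client nic (needs root; reverts on reboot)
ifconfig eth0 mtu 7000
# ping the server with don't-fragment set; 6972 = 7000 minus 28 bytes of ip/icmp headers
# if this fails while a normal ping works, jumbo frames aren't surviving the path
ping -M do -s 6972 -c 3 192.168.0.6
```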
i'm not using encryption on nfs, and ssh worked fine;
on the client, in /etc/fstab i have:
Code:
server:/data0 /media/data0 nfs4 noauto,rw,user,hard,intr,timeo=600,rsize=32768,wsize=32768 0 0
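to experiment with those options i've been remounting by hand rather than editing fstab each time - something like this (same paths as above, options just an example, run as root):

```shell
# unmount and remount with trial options
umount /media/data0
mount -t nfs4 -o rw,hard,intr,timeo=600,rsize=32768,wsize=32768 server:/data0 /media/data0
# confirm what the kernel actually negotiated (rsize/wsize can be capped by the server)
grep data0 /proc/mounts
```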
on the server in /etc/exports i have:
Code:
/export 192.168.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/data0 192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)
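after editing /etc/exports i re-export without restarting nfs, in case that matters to anyone reproducing this:

```shell
# re-read /etc/exports and apply any changes
exportfs -rav
# show what's currently being exported, with the effective options
exportfs -v
```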
i still get nautilus greying out when i write large files (not when reading, iirc), and i suspect it's something to do with async, as the progress bar races along and then just stalls on the last few megabytes - unless it's JFS being slow to write to?
to update the benchmarks, i just copied some iso/avi files from client to server (i.e. writing to the nfs server) from the console rather than via nautilus:
SCP 350mb
real 0m33.323s
real 0m21.645s
SCP 700mb
real 0m44.360s
SCP 3.2gb
real 3m13.181s
NFSv4 350mb
real 0m25.882s
real 0m12.656s
real 0m19.647s
NFSv4 700mb
real 0m42.595s
real 0m49.849s
real 0m34.633s
real 0m39.150s
NFSv4 1.4gb
real 1m54.000s
NFSv4 2.3gb
real 1m50.307s
NFSv4 3.2gb
real 4m23.481s
NFSv4 4x700mb
real 4m23.987s
which is odd, as that says nfs4 is slightly faster than scp for smaller (<1gb) files only, but quite a bit slower for larger files - also, the 2nd time i write the same file over nfs4 it seems to be faster; is it caching somehow (i'm deleting first, so it's not overwriting)?
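converting those timings to rough throughput (treating the sizes as plain decimal MB, so these are only ballpark figures):

```shell
# back-of-envelope MB/s from the timings above
awk 'BEGIN {
    printf "nfs4 350mb (1st run): %.1f MB/s\n", 350/25.882
    printf "nfs4 3.2gb:           %.1f MB/s\n", 3200/263.481   # 4m23.481s
    printf "scp  3.2gb:           %.1f MB/s\n", 3200/193.181   # 3m13.181s
}'
```

i.e. nfs4 wins on the small file but drops below scp on the big one - and neither is anywhere near gigabit line rate.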
is it worth trying the noatime option on the client?
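on the caching question above, one thing i can do to make the runs comparable is drop the client's page cache between copies (needs root; drop_caches exists on 2.6.16+ kernels, so the centos 5 client should have it):

```shell
# flush dirty data, then drop cached pages/dentries/inodes before the next timing run
sync
echo 3 > /proc/sys/vm/drop_caches
```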
running wireshark on the client i'm seeing a few of these in red (.6 is the server, .3 is the client):
Code:
717348 106.617317 192.168.0.6 192.168.0.3 TCP [TCP ACKed lost segment] [TCP Previous segment lost] nfs > apex-edge [ACK] Seq=8177745 Ack=845286573 Win=64128 Len=0 TSV=2116210 TSER=398913346
717349 106.617328 192.168.0.3 192.168.0.6 TCP [TCP Previous segment lost] [TCP segment of a reassembled PDU]
EDIT: from what i've read, those tcp lost segment messages from wireshark are not a problem, so it's not a network problem as far as i can see...
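to double-check that, i also looked at the interface counters on both ends (eth0 is an assumption again):

```shell
# non-zero errors/drops/overruns here would point at the nic, driver or cabling
netstat -i
ifconfig eth0 | grep -E 'errors|dropped'
```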
netstat -plntu on the server shows these nfs-related services (no firewall):
Code:
tcp 0 0 0.0.0.0:711 0.0.0.0:* LISTEN 2225/rpc.statd
tcp 0 0 0.0.0.0:908 0.0.0.0:* LISTEN 2425/rpc.mountd
tcp 0 0 0.0.0.0:879 0.0.0.0:* LISTEN 2395/rpc.rquotad
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 2200/portmap
tcp 0 0 0.0.0.0:35100 0.0.0.0:* LISTEN -
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
udp 0 0 0.0.0.0:905 0.0.0.0:* 2425/rpc.mountd
udp 0 0 0.0.0.0:57875 0.0.0.0:* -
udp 0 0 0.0.0.0:705 0.0.0.0:* 2225/rpc.statd
udp 0 0 0.0.0.0:708 0.0.0.0:* 2225/rpc.statd
udp 0 0 0.0.0.0:876 0.0.0.0:* 2395/rpc.rquotad
udp 0 0 0.0.0.0:111 0.0.0.0:* 2200/portmap
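and from the client side, checking the same services are reachable over the wire via the server's portmapper:

```shell
# list the rpc programs (nfs, mountd, statd etc.) the server's portmapper is advertising
rpcinfo -p 192.168.0.6
```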