The RAID array was resyncing because of an unclean shutdown. Several processes got stuck and I was unable to end them by any means I know of (kill, kill -9, pkill, etc.), so in the end it took a hard reset, which obviously upset the array.
However, my tests occurred once the resync had completed, so this shouldn't have affected the performance.
As for SMART, there is no warning like the one you describe about the Samsung drive, but I will check for updates. After a restart, the array is performing much better. SMART is only being run from the command line; the smartd daemon is not running in the background, so that should rule it out.
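For anyone following along, a one-off check from the command line looks something like this (assuming smartmontools is installed; the device names are placeholders for the array members, not my actual layout):

```shell
# One-shot SMART health check per member disk (device names are examples)
command -v smartctl >/dev/null || { echo "smartmontools not installed"; exit 0; }

for dev in /dev/sda /dev/sdb /dev/sdc; do
    echo "== $dev =="
    smartctl -H "$dev"    # overall PASSED/FAILED verdict
    # the attributes that most often precede a failing drive
    smartctl -A "$dev" | grep -Ei 'reallocated|pending|uncorrect'
done
```

The reallocated/pending/uncorrectable counters are the ones worth watching; a rising count there usually shows up before the overall verdict flips to FAILED.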
dd if=/dev/zero of=/share/largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.87954 s, 373 MB/s
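One caveat worth flagging on that number: with a 1 GB write on a 4 GB machine, much of the data can still be sitting in the page cache when dd reports, so 373 MB/s may overstate what the array actually sustains. A variant that includes the flush-to-disk time in the figure (conv=fsync is standard GNU dd; the target defaults to a temp file here just so the sketch runs anywhere - point it at the array mount for a real measurement):

```shell
# Throughput test that counts the flush to disk in the reported MB/s.
# TARGET defaults to a temp file for illustration; use the array mount
# (e.g. /share/largefile) for a real measurement.
TARGET=${TARGET:-$(mktemp)}

# conv=fsync makes dd fsync the output before exiting, so the MB/s figure
# reflects the disks rather than just filling RAM. A smaller count is used
# here; scale it up (or add oflag=direct to bypass the cache entirely,
# where the filesystem supports O_DIRECT) on real hardware.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
```

With fsync (or direct I/O) in the picture, runs of different sizes should report much more consistent numbers, which makes the later comparison against the 8192-count run fairer.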
Unfortunately, a subsequent test didn't go as well. I used the same parameters except with a count of 8192. It seemed to be working, then stalled.
(There is supposed to be an image above this text but it isn't showing up :s - here is the link: gkrellm screenshot)
Also, I have 4 GB of DDR3 at 1,333 MHz and 4 GB of swap on my main drive, where the OS is located - a Corsair ForceGT SSD. I am also running an Intel Core i7 2600K at stock settings. I know the specs are overkill for what is essentially a server to me, but I got the CPU cheap from a friend at Intel :P In any case, it shouldn't be the bottleneck.
Time for a memory test and firmware update it seems.
Thanks again for all your help!
##### EDIT #####
Ok, lots of memory tests later, there is definitely something wrong... For one, the modules together were running in single channel (the manual for the motherboard and the QVL contradict each other on which DIMM slots share a channel). The two modules together produce 1 or 2 errors in Memtest86+, but each module on its own shows no errors - a very weird fault. The kit I have (2x 2GB) isn't on the QVL, BUT the code for a single stick is? Very strange. In any case, I will more than likely be investing in new memory to see if it fixes these issues.
Once I get it, I'll let you guys know the results.
##### EDIT 2 #####
Ok, new memory purchased, installed and tested using Memtest86+ with no errors (phew).
Had to wait a further 4.5 hours for the array to resync due to an unclean shutdown caused by stuck processes (in D state). Finally tested, and it appears to be better; hopefully I won't suffer any more errors.
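As an aside for anyone stuck watching a similar resync: progress and the kernel's throttle values live in the standard md proc entries (these are the stock kernel paths, not anything specific to this box):

```shell
# Resync progress (per-array percentage and ETA), if any md arrays exist
[ -r /proc/mdstat ] && cat /proc/mdstat || echo "no md arrays on this machine"

# The kernel throttles resync between these two values (KB/s); raising the
# minimum can shorten the wait at the cost of foreground I/O, e.g.:
#   echo 50000 > /proc/sys/dev/raid/speed_limit_min   # as root
[ -r /proc/sys/dev/raid/speed_limit_min ] && cat /proc/sys/dev/raid/speed_limit_min || true
[ -r /proc/sys/dev/raid/speed_limit_max ] && cat /proc/sys/dev/raid/speed_limit_max || true
```

The defaults deliberately favour foreground I/O, which is part of why a resync on large drives takes hours even when the array is otherwise idle.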
dd if=/dev/zero of=/share/largefile bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 66.3043 s, 130 MB/s
Thanks again for all your help. The topic may have gone off on a bit of a tangent, but hopefully the issue is resolved and documented for someone else :P
Ok, well, that was short-lived.
The problem is back. I did a transfer of maybe 60-80 GB and then suddenly the speed dropped to below 1 MBps. This just doesn't make any sense; the hard drives are clean with no bad sectors and the memory is brand new and error-free.
Every time this happens, cp enters process state 'D' (uninterruptible sleep), which apparently means it is waiting on I/O. All the drives are still responding, albeit the RAID array is incredibly slow. I believe this is down to the HDDs I'm using: I found an article benchmarking multi-process writes where the Samsung F2 drives performed terribly, at around 6-8 MBps, which is close to what I'm experiencing. So the big question is: why does cp keep hanging on I/O?
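For what it's worth, the stuck cp can be caught in the act: listing the D-state processes together with their kernel wait channel narrows down which layer is blocking (this is standard procps ps, nothing exotic):

```shell
# Show every process in uninterruptible sleep (state D) plus the kernel
# function it is waiting in (wchan); the header row is kept for readability
ps -eo pid,stat,wchan:30,comm | awk 'NR==1 || $2 ~ /^D/'

# For one specific PID, the kernel stack usually names the blocked I/O path
# (replace 1234 with the stuck cp's PID; readable as root):
#   cat /proc/1234/stack
```

If the wait channel points into the md or block layer rather than the filesystem, that would support the theory that the drives themselves are the ones stalling under concurrent writes.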
This post is getting pretty long now, but I figure it's better to edit than create a new post. Hopefully someone has some idea.
I'm currently trying to perform a backup due to a system failure on another machine, so this fault has completely messed everything up over the past few days.
Just in case this is helpful: