I'm new to Linux and am having trouble setting up a software RAID 5 file server.
I have repurposed an old system: an Athlon 1800+, 1GB RAM, a 400W PSU and a 120GB IDE HDD.
I then added a 4-port Silicon Image SiI 3114 SATA RAID controller (I can't afford proper hardware RAID). Connected to the controller are three 300GB drives.
I managed to install FC5 and apply all updates currently available through yum.
Linux is installed on the 120GB IDE HDD.
I then partitioned each of the SATA drives into 10 identical partitions of approx 30GB each, with the partition type set to 'Linux RAID Autodetect'. I am currently testing with just two arrays assembled: /dev/md1, consisting of /dev/sda1, /dev/sdb1 and /dev/sdc1, and /dev/md2, consisting of /dev/sda2, /dev/sdb2 and /dev/sdc2.
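For reference, this is roughly how I created the two test arrays (commands from memory, chunk size and everything else left at mdadm's defaults):

```shell
# Partition type on every member partition is fd (Linux RAID Autodetect).
# Create the two test arrays, RAID 5 across the three drives:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
```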
My mdadm.conf file is correct, but Linux refuses to assemble these arrays on boot. It also refuses to assemble them from the shell when I use 'mdadm --assemble /dev/md1'. I did manage to get each of the arrays to assemble using 'mdadm --assemble --update summaries /dev/md1'.
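In case it matters, my mdadm.conf is along these lines (identifying the arrays by member device lists rather than UUIDs; reproduced from memory):

```
DEVICE /dev/sda* /dev/sdb* /dev/sdc*
ARRAY /dev/md1 devices=/dev/sda1,/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 devices=/dev/sda2,/dev/sdb2,/dev/sdc2
```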
I added the assemble commands to /etc/rc.d/rc.local, so now every time I restart the system the RAID devices are assembled and reported clean and synced.
Next I created an LVM volume group named LVGraid consisting of /dev/md1 and /dev/md2, then added a logical volume, /dev/LVGraid/lv0, which used all of the space available in the VG. This formatted as ext3 and seems happy.
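The LVM setup was roughly this (again from memory; the extent count is a placeholder, I took the real free-extent figure from vgdisplay):

```shell
pvcreate /dev/md1 /dev/md2
vgcreate LVGraid /dev/md1 /dev/md2
# lv0 takes every free extent in the VG; <free-extents> came from vgdisplay
lvcreate -n lv0 -l <free-extents> LVGraid
mkfs.ext3 /dev/LVGraid/lv0
```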
Whenever I restarted, this VG was not available, presumably because the RAID arrays were only being assembled after LVM had already tried to activate the VG. To fix this I added another command to /etc/rc.d/rc.local, after the RAID assembles: 'vgchange -ay LVGraid'.
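So the tail of my /etc/rc.d/rc.local currently looks something like this:

```shell
# Assemble the arrays (plain --assemble refuses to work, as described above)
mdadm --assemble --update summaries /dev/md1
mdadm --assemble --update summaries /dev/md2
# Activate the volume group now that the arrays exist
vgchange -ay LVGraid
```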
This worked: at every boot lv0 became available and was reported correctly.
To be sure of it all I ran 'e2fsck -f /dev/LVGraid/lv0', which reported problems with the filesystem image; I fixed them and ran it again. Every time I run it, similar errors are reported. Even after running it with a bad-block scan a number of times, the image is still not clean.
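The check cycle I keep repeating (with the filesystem unmounted) is roughly:

```shell
e2fsck -f /dev/LVGraid/lv0       # forced check: reports errors
e2fsck -f -y /dev/LVGraid/lv0    # accept all the suggested fixes
e2fsck -f -c /dev/LVGraid/lv0    # same again, with a bad-block scan
```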
I decided to continue anyway, since the filesystem was reported clean unless I forced the check. I mounted it at /raid and everything seemed fine until I attempted to copy data to it. When transferring anything to it, after about 250MB I get an error saying it cannot write to lv0, so I am forced to abort the transfer. When I try to resume the transfer, I get errors saying I don't have permission to write to /raid. After a reboot I am allowed to write again, but after about 250MB the same error repeats.
Any suggestions on how to fix this would be appreciated, as I am on the verge of giving up and resorting to Windows, which I would rather not do!