Re: [wlug] wlug Digest, Vol 191, Issue 16

On Wed, 2007-05-30 at 22:36 +1200, Eric Light wrote:
Hi Elroy, Simon, and Daniel,
Thanks for your responses. I've read similar things re: setting up RAID1 on an already-installed system being heinous and unbecoming. I'd read that Fedora supports fakeraid (at least the Nvidia flavour) so I decided to wipe it all and give that a try, to see if I can nut this out myself. The results were comical... RAID works perfectly (not the nvraid, but FC supports booting from software raid devices).
My experience with software RAID1 under Ubuntu was rather good. I did a RAID install with two IDE drives for the first time about two weeks ago, which was surprisingly easy. And I've had a play with RAID5 on three SCSI drives.

Also rebuilt a server last week: two 200G SCSI drives, both partitioned as 198G RAID and 2G swap, then configured the two RAID partitions into one /dev/md0, then went back and set up /dev/md0 as ext3 root.

Only one thing to watch out for: if the drives were already partitioned for RAID, the installer gets horribly confused. Remove all previous partitions from all the drives and reboot so the installer kernel doesn't have any 'preconceptions' about how the drives are configured, and everything should be fairly straightforward after that.
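For anyone following along at home, the setup described above maps roughly onto commands like these. This is a sketch only, not a recipe: the device names (/dev/sda, /dev/sdb) and partition numbers are hypothetical, and every command here is destructive and needs root.

```shell
# DESTRUCTIVE sketch -- assumes /dev/sda and /dev/sdb are the two drives
# being rebuilt; adjust device names for your own hardware.

# Zero the old partition tables so the installer kernel has no
# 'preconceptions' about how the drives were previously configured.
dd if=/dev/zero of=/dev/sda bs=512 count=1
dd if=/dev/zero of=/dev/sdb bs=512 count=1

# (Then partition each drive -- e.g. with fdisk -- into one large
#  partition of type 'fd' (Linux raid autodetect) plus a small swap
#  partition, matching the 198G + 2G layout above.)

# Mirror the two large partitions into a single RAID1 array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put an ext3 filesystem on the array; the installer can use it as root.
mkfs.ext3 /dev/md0

# Watch the initial resync progress.
cat /proc/mdstat
```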

On 5/31/07, Bruce Kingsbury <zcat(a)wired.net.nz> wrote:
My experience with software RAID1 under Ubuntu was rather good. I did a RAID install with two IDE drives for the first time about two weeks ago which was surprisingly easy. And I've had a play with raid5 on three scsi drives.
Also rebuilt a server last week.. two 200G SCSI drives, both partitioned as 198G raid and 2G swap, then configured the two raid partitions into one /dev/md0, then went back and set up /dev/md0 as ext3 root.
Only one thing to watch out for, if the drives were already partitioned for RAID the installer gets horribly confused. Remove all previous partitions from all the drives and reboot so the installer kernel doesn't have any 'preconceptions' about how the drives are configured, and everything should be fairly straightforward after that.
<noob state="clueless"> Could someone please explain briefly the difference between the different categories of RAID. I'm still a little confused. Where do RAID 0, RAID 1 and RAID 5 differ? Isn't a raid a raid? </noob>

--
James Pluck
PalmOS Ergo Sum
"Dear IRS: I would like to cancel my subscription. Please remove my name from your mailing list..."

On 31/05/07, James Pluck <papabearnz(a)gmail.com> wrote:
Could someone please explain briefly the difference between the different categories of RAID. I'm still a little confused. Where do RAID 0, RAID 1 and RAID 5 differ? Isn't a raid a raid?
You generally can't beat Wikipedia's definitions of things, so I won't try: "...A redundant array of inexpensive (or independent) drives (or disks) is an umbrella term for data storage schemes that divide and/or replicate data among multiple hard drives. They offer, depending on the scheme, increased data reliability and/or throughput." http://en.wikipedia.org/wiki/RAID#Standard_RAID_levels

--
simon

<noob state="clueless"> Could someone please explain briefly the difference between the different categories of RAID. I'm still a little confused. Where do RAID 0, RAID 1 and RAID 5 differ? Isn't a raid a raid? </noob>
RAID0 -- stripe the data across the drives so your two 60GB drives look like one big, fast 120GB drive. If either drive fails, everything stops working and the data is gone.

RAID1 -- mirror the drives so your two 60GB drives look like one 60GB logical drive. If either of the physical drives fails, the system keeps working from the good drive until you replace the faulty one.

RAID5 -- take three or more drives and spread the data across them with distributed parity (interleaved XOR checksums). Your five 60GB drives end up looking like one 4x60GB (240GB) volume. If any one drive fails, the missing data can be reconstructed from the data and parity on the surviving drives, and the system keeps running until you replace the faulty drive. AFAIK there are variants (hot spares, and RAID6 with a second parity block) that use different numbers of drives and survive more than one failure before the array stops working.
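The RAID5 parity trick above can be demonstrated with plain shell arithmetic. This is a toy sketch with made-up values: a real array XORs whole stripes of blocks across the drives, not single numbers.

```shell
# Toy RAID5 parity demo: three data "blocks" plus one parity "block".
a=42; b=17; c=99           # data blocks, one per drive (made-up values)
p=$(( a ^ b ^ c ))         # parity block: XOR of all the data blocks

# Pretend the drive holding b fails; rebuild it from the survivors:
b_rebuilt=$(( a ^ c ^ p ))
echo "lost block b = $b_rebuilt"   # prints 17, matching the original
```

Because XOR is its own inverse, any single missing block can be recovered by XORing everything that survived, which is exactly why a RAID5 array keeps running with one dead drive.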
participants (5)
- Bruce Kingsbury
- Craig Box
- Eric Light
- James Pluck
- Simon Green