Many distributions let you configure RAID during a fresh install, but this page describes how to transition to RAID once a Linux distribution is already installed on a non-RAID partition.
For the typical home user, RAID0 (parallel throughput with no redundancy) works well because it requires the fewest drives. The setup in this example uses one hard drive on IDE0, one on IDE1 (that is, the drives are on separate cables) and one SCSI drive. It is important that each drive be on a different channel: two drives sharing one IDE cable take turns on the bus, which defeats the purpose of striping.
In my experience, Linux software RAID is very efficient: expect nearly twice the throughput with two drives, nearly three times with three drives, and so on.
A RAID partition is made up of a collection, or array, of hard drive partitions. A RAID0 partition is sized on the smallest partition in the collection. Therefore, when setting up the partitions with fdisk, make each partition in the collection as close to the same size as you can to avoid wasting space. For example, a two-partition RAID0 with hda1 set to 20 GB and hdc1 set to 30 GB will leave 10 GB of hdc1 completely inaccessible.
The size of a RAID0 partition is simply the size of the smallest partition times the number of partitions in the collection. For the above example, the RAID0 would be 40 GB.
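The same rule in shell form, for any number of member partitions (a throwaway sketch; the two sizes are from the example above):

# RAID0 size = smallest member * number of members
set -- 20 30                           # member partition sizes in GB (hda1, hdc1)
min=$1
for s in "$@"; do [ "$s" -lt "$min" ] && min=$s; done
echo "RAID0 size: $((min * $#)) GB"    # prints: RAID0 size: 40 GB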
First, check the drive sizes the kernel reported at boot:

dmesg -s65536 | less
hdb: 55704096 sectors (28520 MB) w/512KiB Cache, CHS=3467/255/63, UDMA(33)
hdc: 37615536 sectors (19259 MB) w/418KiB Cache, CHS=37317/16/63, UDMA(33)
SCSI device sda: 17783240 512-byte hdwr sectors (9105 MB)

Since the SCSI drive is the smallest, I decided to use all of it for the RAID0 partition and to configure a 9 GB partition on each of hdb and hdc. The resulting RAID partition will be 27 GB, which is plenty for any Linux distribution.
root@bultaco:~# fdisk /dev/sda
NOTE: Be sure to set all partitions in the RAID collection to type "fd". Otherwise the RAID partition will not be recognized by the kernel at boot up.
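In fdisk, the partition type is changed with the t command; the exchange looks something like this (shown here for partition 2 of a drive, as with hdb2 below):

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)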
/dev/sda1, the first RAID0 partition in the collection, looks like this:
Disk /dev/sda: 9105 MB, 9105018880 bytes
64 heads, 32 sectors/track, 8683 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1      8683   8891376   83  Linux

Note also that the number of bytes can be computed fairly closely with
8683 cylinders * 64 tracks/cylinder * 32 sectors/track * 512 bytes/sector = 9104785408 bytes
or
8891376 blocks * 1024 bytes/block = 9104769024 bytes
This gives a difference of 9104785408 - 9104769024 = 16384 bytes. That is exactly one 32-sector track (32 * 512 = 16384): fdisk starts the first partition on the second track, leaving track 0 free for the master boot record.
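Shell arithmetic confirms all three numbers (a quick sanity check you can paste at any prompt):

echo $((8683 * 64 * 32 * 512))   # 9104785408, bytes from the CHS geometry
echo $((8891376 * 1024))         # 9104769024, bytes from the block count
echo $((32 * 512))               # 16384, one track: the reserved gap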
root@bultaco:~# fdisk /dev/hdb

The end result is the creation of /dev/hdb2, as shown in the following listing:
Command (m for help): p

Disk /dev/hdb: 28.5 GB, 28520497152 bytes
255 heads, 63 sectors/track, 3467 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdb1             1        32    257008+  82  Linux swap
/dev/hdb2            33      1139   8891977+  fd  Linux raid autodetect
/dev/hdb3          1140      3467  18699660   83  Linux

The Start and End cylinder numbers were calculated as follows:
N0 = CE0 - CS0 + 1 = 8683 - 1 + 1 = 8683
N1 = H0*N0*S0 / (H1*S1) = 64*8683*32 / (255*63) = 1106.927, rounded up to 1107
CE1 = CS1 + N1 - 1 = 33 + 1107 - 1 = 1139
where:

N0:  number of cylinders in the initial partition
CS0: starting cylinder number of the initial partition
CE0: ending cylinder number of the initial partition
H0:  number of logical heads on the initial drive
S0:  sectors per track on the initial drive
N1:  computed number of cylinders for the next partition
CS1: starting cylinder number of the next drive's partition
CE1: ending cylinder number of the next drive's partition
H1:  number of logical heads on the next drive
S1:  sectors per track on the next drive
An End cylinder of 1139 makes hdb2 (8891977 blocks) just slightly larger than sda1 (8891376 blocks), so sda1 remains the smallest member of the array. A cylinder of excess is just pocket change, but rounding N1 down instead would make hdb2 the smallest member and shrink the whole array.
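If you'd rather not push the numbers around by hand, here is the same calculation in shell (a sketch; plug in your own geometry from the fdisk listings):

H0=64;  S0=32; N0=8683      # geometry and cylinder count of the initial partition (sda1)
H1=255; S1=63; CS1=33       # geometry and start cylinder on the next drive (hdb)
N1=$(( (H0 * N0 * S0 + H1 * S1 - 1) / (H1 * S1) ))    # divide, rounding up
echo "N1 = $N1, End cylinder = $(( CS1 + N1 - 1 ))"   # N1 = 1107, End cylinder = 1139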
Disk /dev/hdc: 19.2 GB, 19259154432 bytes
255 heads, 63 sectors/track, 2341 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1             1        32    257008+  82  Linux swap
/dev/hdc2            33      1139   8891977+  fd  Linux raid autodetect
/dev/hdc3          1140      2341   9655065   83  Linux

The calculation works out the same for this drive as it did for /dev/hdb, since hdc has the same logical geometry (255 heads, 63 sectors/track).
The array of interest here is /dev/md0, the RAID0 array; /dev/md1 is just a linear array that lumps the leftover space into one contiguous partition. Both are defined in /etc/raidtab:
root@bultaco:/etc# cat raidtab
#
# chunk-size shootout:
#
# 16 = 4.8 Mb/s
# 32 = 5.1 Mb/s
# 64 = 4 Mb/s
#
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hdb2
        raid-disk               0
        device                  /dev/hdc2
        raid-disk               1
        device                  /dev/sda1
        raid-disk               2

raiddev /dev/md1
        raid-level              linear
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              32
        device                  /dev/hdc3
        raid-disk               0
        device                  /dev/hda2
        raid-disk               1
root@bultaco:/etc# mkraid /dev/md0

Do the same for /dev/md1, then check that both arrays are up:
root@bultaco:/etc# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md1 : active linear hda2[1] hdc3[0]
      12751488 blocks 32k rounding
md0 : active raid0 sda1[2] hdc2[1] hdb2[0]
      26675072 blocks 32k chunks
unused devices: <none>
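The chunk-size shootout numbers in raidtab came from simple sequential-read timings. If you want to run your own, something along these lines works (rebuild the array with each candidate chunk-size between runs):

hdparm -t /dev/md0                              # buffered sequential read timing
dd if=/dev/md0 of=/dev/null bs=1024k count=512  # or time a raw 512 MB read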
Now create a reiserfs filesystem (labeled "root") on the new array:

root@bultaco:/etc# mkreiserfs -l root /dev/md0
Next, copy the existing installation onto the new RAID partition. There are several ways to do this; here are the commands I've used:
cd /
mkdir old new
mount /dev/hda2 old
mount /dev/md0 new
cp -a old/* new
reboot
The above "reboot" command should get you back into Linux on the old partition.
After rebooting into the old system, mount the new RAID partition so you can edit its configuration files:

mount /dev/md0 /mnt
Change /mnt/etc/fstab to look something like this:
/dev/hdb1    none     swap        pri=42
/dev/hdc1    none     swap        pri=42
/dev/md0     /        reiserfs    defaults    1 1
/dev/hda6    /boot    ext2        defaults    1 2
/dev/md1     /r       ext3        defaults    1 2
/dev/hda2    /old     reiserfs    defaults    1 2

Setting both swap partitions to the same priority makes the kernel allocate swap pages across them round-robin, RAID0-style; you can confirm the priorities after rebooting, as shown below.
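Once you boot into the new setup, a one-liner confirms both swap areas are active at the same priority:

cat /proc/swaps     # both partitions should show Priority 42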
The last fstab line, which mounts /dev/hda2 on /old, lets you look at the old Linux distribution on the single partition. For this to work you have to create the /old mount point with the command:
root@bultaco:/etc# mkdir /old
Next, set up /mnt/etc/lilo.conf so both the new RAID system and the original one are offered at boot:

# LILO configuration file
# Start LILO global section
boot = /dev/hda
message = /boot/boot_message.txt
prompt
timeout = 150
# Override dangerous defaults that rewrite the partition table:
change-rules
reset
# VESA framebuffer console @ 1024x768x256
vga = 773
default = Raid
# End LILO global section

# Windows
other = /dev/hda1
  label = Windows
  table = /dev/hda

# Linux RAID0
image = /boot/vmlinuz.s1
  root = /dev/md0
  label = Raid
  read-only

# Linux - Original install in single partition
image = /boot/vmlinuz.s1
  root = /dev/hda2
  label = linux-old
  read-only

Simply copying the files to /dev/md0 and adding a new section to lilo.conf leaves the original Linux installation intact. This way you can always boot the original installation until the new one is working (or keep it as a complete, working backup installation).
Now install the boot loader; the -r option makes lilo chroot to /mnt and use /mnt/etc/lilo.conf:

root@bultaco:/etc# lilo -r /mnt
If lilo runs successfully, it will list the boot labels from your /mnt/etc/lilo.conf file and place a star after the default system (in this case, "Raid" is the default).
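With the lilo.conf above, the output should look something like this (hypothetical, but the labels match the configuration):

Added Windows
Added Raid *
Added linux-old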
If all seems to be in order, Go For It and Reboot!
If your new RAID setup does not come up, you can reboot to the old one and fix things. Email me at davidbtdt at gmail.com if you need help.