LinuxQuestions.org
Old 05-29-2001, 01:57 AM   #1
jose stephen
LQ Newbie
 
Registered: May 2001
Posts: 2

Rep: Reputation: 0

Hi all,
I have 3 SCSI hard disks of 18 GB capacity each.
I configured RAID using Disk Druid during the installation of Red Hat 7.0.
I configured the RAID as follows:

/boot   sda1       not RAID
/       /dev/md0   RAID 1
/var    /dev/md1   RAID 5
/home   /dev/md2   RAID 5
/usr    /dev/md3   RAID 1
/opt    /dev/md4   RAID 1

RAID 1 is set up across two different disks, and RAID 5 is set up across all three disks.

When I tested the RAID by removing a hard disk and booting with a boot disk, the system halted at a prompt asking for the root password to repair the disks. I tested by removing each disk one by one; the result was the same every time.
I read in the documentation that the RAID should automatically serve the data from the other disks even when a disk has failed, but that is not happening in my case.
What am I doing wrong? Please help.
 
Old 05-29-2001, 10:25 PM   #2
mcleodnine
Senior Member
 
Registered: May 2001
Location: Left Coast - Canada
Distribution: s l a c k w a r e
Posts: 2,731

Rep: Reputation: 45
RAID setup

While I won't profess to be authoritative on the subject, it would appear that you are trying to twist your mount points around a 'unique' RAID implementation. My guess is that Red Hat partitioned your drives so that you can divide the high-demand mount directories (/var, /usr, /usr/local) across different physical devices. This is a good strategy for a basic web server. It also gives you some control for security purposes, as outlined in the "secure_webserver" how-to over at suse.com.

Setting up software RAID can help improve performance and achieve a reasonable level of availability.

However, this setup appears to defeat both of those benefits.

If you are trying to set up mounts for enhanced security or performance, do it without RAID or on a physical device that is not part of the RAID array. The gains made by using RAID are lost when you make your drives participate in several different arrays. It looks like you may have been confused by the docs and the default install, as it appears that you have FIVE 'multiple devices'. So let's go back to basics...

First - the obligatory nag to go see the HowTo. http://www.ibiblio.org/pub/Linux/doc...AID-HOWTO.html
It's the one that got me going.

Here's the quick and dirty...
Think of a multiple-device entry like /dev/md0 as a container for the physical partitions that participate in the array. If those partitions are all on the same drive, you won't have any redundancy, and you will get LOUSY performance, as each stripe in the set has to wait for the drive to finish reading/writing the previous stripe. If each partition is on a different physical device, you get the benefits of both redundancy and performance, especially with your SCSI setup, as it more than likely supports disconnect: your controller can write each stripe in parallel, and your software RAID can distribute the I/O across all the devices, allowing greater (theoretical) throughput.
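(A quick aside: once an array exists, the kernel lists every active md device, its member partitions, and any failed members in /proc/mdstat, so a plain

cat /proc/mdstat

is the easiest way to check what is actually participating in what.)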

With your three physical drives you could set up your system with both RAID and multiple mount points. Here's how my RAID 5 setups usually go...

1) Partition each of your sdX drives into three partitions. To keep things simple, make each partition the same size on each drive, i.e. if you make sda1 24M, make sdb1 and sdc1 24M as well. This keeps life simple and redundant (there's a sketch of the resulting layout after this list). So...

...Make one for /boot (sda1, sdb1, sdc1) of about 24M. That gives you lots of room for your System.map, vmlinuz, etc. Mark this partition as type 83 'Linux'.

...Another partition for swap space (yes, on each drive). Nobody has run up and slapped me for using swap space yet, so I still use it. Mark this partition as type 82 'Linux swap'.

...A third partition for your RAID (yes, on each drive). Use up the rest of the disk (on each one) for this one. Mark them all as type fd 'Linux RAID autodetect'.
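To put that all in one place, the per-drive layout I'm describing looks roughly like this (sizes are just examples, pick what suits you):

sdX1   ~24M               type 83  Linux                   ->  /boot
sdX2   your swap size     type 82  Linux swap              ->  swap
sdX3   rest of the disk   type fd  Linux RAID autodetect   ->  member of /dev/md0

Same three partitions on sda, sdb, and sdc.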

Format your /boot as an ext2 filesystem (it should default to a 2048 block size as it is a small partition).

Your install routine should 'mkswap' your sdX2 partitions and put them in /etc/fstab for you.
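If you end up doing that part by hand instead, the commands are roughly (assuming the partition numbering above):

mke2fs /dev/sda1                        # format /boot as ext2
mkswap /dev/sda2 && swapon /dev/sda2    # repeat for sdb2 and sdc2

Nothing exotic; your installer almost certainly does the equivalent for you.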

If your install script supports a RAID install then we can move on, but I'm working blind here as I'm not a redhatter. Use the three partitions that you marked as type fd 'Linux RAID Auto' for your /dev/md device. I use /dev/md0 just because it's what I'm used to. Your install routine should be able to 'mkraid' and format it, and from there you can install the system to / (root) on /dev/md0. If you want a real challenge you could use LVM (the Logical Volume Manager) to create the different logical volumes you seem to want for your multiple mount points. I can't say I'd recommend this. I use LVM, but only for spanning multiple physical devices (sweet volume manager!) or multiple /dev/mdX devices (and I've only done that once).
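If you do try the LVM route anyway, the rough shape of it looks like this (the volume group and volume names are made up, and the sizes are only examples):

pvcreate /dev/md0                 # mark the array as an LVM physical volume
vgcreate vg0 /dev/md0             # build a volume group on top of it
lvcreate -L 2G -n var vg0         # one logical volume per mount point you want
lvcreate -L 4G -n home vg0
mke2fs /dev/vg0/var               # then put a filesystem on each /dev/vg0/<name>

But as I said, I wouldn't start there.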

If your install routine won't build the array for you, just make the installer put everything on the first disk only, by mounting your first partition on /boot and the root "/" partition on the 'Linux RAID Auto' partition you made. Your fstab should look something like this (I left out the mount options as I'm sure your distro will take care of those for you):

/dev/sda1   /boot   ext2   defaults
/dev/sda2   swap    swap
/dev/sda3   /       ext2

Personally, I recommend reiserfs for your root partition if your distro supports it out of the box (plug for SuSE!). I use reiserfs on my RAID devices as well, although I may run into difficulties if someone ever gets a good RAID resize utility up and running.

Once your rig is up and running on the ONE drive, you can build your RAID array as outlined in the Root RAID section of the aforementioned How-To. I'll outline it here.

Build your /etc/raidtab file. DO NOT make your /dev/sda3 partition the first device entry! It is our current root device, and a little bug in mkraid won't let us mark the first entry as a failed-disk. (It's not really failed, but we are tricking mkraid so we can build the filesystem on the RAID.) DON'T put the asterisks in your raidtab file! They are only there to highlight the failed-disk directive.

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/sdb3
        raid-disk               0
        device                  /dev/sda3
*****   failed-disk             1
        device                  /dev/sdc3
        raid-disk               2

'mkraid' it as per the How-To.
Format it as per the How-To, or use reiserfs (I did).
Mount the new /dev/md0 somewhere handy like /mnt.
Copy your stuff over from your current root as per the How-To.
I made a boot floppy, as I am a dumb-ass when it comes to lilo. Tell the boot floppy to use /dev/md0 as the root filesystem, and obviously have RAID compiled into the kernel. (There's a rough command sketch below.)
I don't make any changes to my current lilo setup until I confirm that I can boot to my crippled RAID device from the floppy.
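For what it's worth, that last stretch boils down to something like this (a sketch only; the copy step is just one of several ways the How-To shows, and the floppy trick assumes your kernel image fits on a floppy with RAID built in):

mkraid /dev/md0                        # build the degraded array from /etc/raidtab
mkreiserfs /dev/md0                    # or mke2fs /dev/md0 if you stick with ext2
mount /dev/md0 /mnt
cd / && find . -xdev | cpio -pdm /mnt  # copy the running root, staying on this one filesystem
dd if=/boot/vmlinuz of=/dev/fd0        # raw kernel image straight onto the floppy
rdev /dev/fd0 /dev/md0                 # point that kernel at the RAID as its root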

OK.. now it looks like I'm being a gas bag. Start with this and post if you are still in grief.

Cheers,
D.
 
  

