Brain Dump

A place to store my random thoughts and anything else I might find useful.

Thoughts on RAID for Linux

Posted by mzanfardino on October 18, 2010

The following article is primarily for my own personal benefit; however, comments are welcome, as the resulting document is limited to my personal experience with soft-RAID on Ubuntu 9.04.

Brief summary of RAID

RAID, an acronym for redundant array of independent disks, is a technology that provides increased storage reliability through redundancy, combining multiple low-cost, less-reliable disk drive components into a logical unit in which all drives in the array are interdependent.[1]

There are a number of RAID levels, which are described in detail elsewhere. For the purposes of this article I will be focusing on RAID 1, with some references to RAID 0.

Types of RAID

It’s important to note that there are essentially three types of RAID:

  1. Hardware
  2. Software
  3. FakeRAID

In the case of hardware RAID, all the RAID functionality is handled by the hardware and does not require additional software components, as the RAID drive(s) will be exposed to the underlying OS as standard devices.

Software RAID is quite different from hardware RAID. Software RAID is handled by the OS, which has a number of implications in terms of availability and performance, not to mention management.

FakeRAID is partial hardware RAID without the features and functions that a true hardware RAID controller offers. This type of RAID has become increasingly available as more and more motherboards come equipped with BIOS-RAID controls, which permit the creation of a RAID array without the ability to manage it (beyond its creation and deletion).

FakeRAID is generally supported by Windows with the use of additional drivers, with the RAID itself running in software on the OS. FakeRAID is supported on Linux via dmraid, which permits dual-boot between Windows and Linux whilst maintaining a consistent RAID array.
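As an illustration, the member disks of a FakeRAID set can be read off the dmraid -r listing. The output below is paraphrased for an Intel (isw) controller; the set name and sector count are placeholders, and the exact format varies by controller and dmraid version:

```shell
# Paraphrased `dmraid -r` output for an Intel (isw) FakeRAID mirror; the set
# name and sector count are placeholders and the exact format varies:
dmraid_out='/dev/sda: isw, "isw_example_set", mirror, ok, 312581806 sectors, data@ 0
/dev/sdb: isw, "isw_example_set", mirror, ok, 312581806 sectors, data@ 0'

# The field before the first colon is the member device:
printf '%s\n' "$dmraid_out" | cut -d: -f1
```

Knowing exactly which devices belong to the set matters later, when their metadata has to be erased.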

More on Software RAID

There are essentially two types of software RAID for Linux:

  1. dmraid (device-mapper)
  2. mdadm (multiple disk administrator)

In the Linux kernel, the device-mapper serves as a generic framework to map one block device onto another. It forms the foundation of LVM2 and EVMS, software RAIDs, dm-crypt disk encryption, and offers additional features such as file-system snapshots.

Device-mapper works by processing data passed in from a virtual block device, that it itself provides, and then passing the resultant data on to another block device.[2]
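To make the mapping idea concrete, here is the shape of a device-mapper table line, the kind fed to dmsetup create for a simple linear target. The backing device and sector count below are invented for illustration:

```shell
# A device-mapper table line: start-sector, length, target type, target args.
# This one would map a 409600-sector virtual device straight onto /dev/sda2
# (numbers and device are made up for the example).
table='0 409600 linear /dev/sda2 0'

# Split the table into its fields (word-splitting is intentional here):
set -- $table
echo "start=$1 length=$2 target=$3 backing=$4"
```

RAID targets work the same way, just with a different target type and arguments describing the member devices.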

mdadm is a Linux utility by Neil Brown, previously known as mdctl, that is used to manage software RAID devices. Besides managing arrays, it can create, delete, and monitor Linux software RAIDs. mdadm is free software, available under version 2 or later of the GNU General Public License.

mdadm derives its name from the “md” (multiple disk) device nodes it “adm”inisters or manages.[3]

Which Soft-RAID to use?

There are a number of fundamental differences between dmraid and mdadm that I won’t attempt to get into here. However, the decision of which software RAID to choose comes down to one question: will this PC be configured for dual-boot between Linux and Windows? If the answer is no, then mdadm is the solution to choose.

I have discovered that dmraid does not offer the same level of functionality, particularly when it comes to managing the RAID in cases where the array has become degraded or a drive has to be replaced. Early versions of dmraid (including the version installed with Ubuntu 9.04) cannot rebuild a degraded array. This means that even after replacing a defective drive, the RAID will remain degraded.

On the other hand, mdadm is a fully-functioning software RAID solution which provides all the management tools required to manage all aspects of the array.

Given just these few differences, it’s clear that mdadm is the better software RAID solution for a dedicated Linux PC. However, what if dmraid was selected during installation of the OS? Can dmraid be converted to mdadm, permitting use of all the management tools mdadm provides? Fortunately, the answer is a qualified yes! The qualification is that the existing RAID must be RAID 1 (and not RAID 0).

Converting dmraid to mdadm


For the sake of this article I will make a few assumptions:

  1. System was installed with RAID 1.
  2. At least one disk in the array is fully functioning.
  3. At least one disk is available to create the new RAID 1 array.
  4. The operator has some knowledge of what they are doing!

Essentially, the steps that will be covered are as follows:

  1. Break dmraid.
  2. Disable FakeRAID in BIOS.
  3. Create a “broken” RAID 1 array with mdadm.
  4. Replicate data from “broken” dmraid array to newly created “broken” mdadm array.
  5. Configure system to boot from newly created “broken” mdadm array.
  6. Add “broken” dmraid disk into mdadm array.
  7. Rebuild mdadm array.

I have borrowed heavily from an existing how-to for this document. Please refer directly to that document with questions concerning the consistency of this document or other issues not covered here.

!!! WARNING !!!

At this point I want to state clearly that if this document is to be used by someone other than me, your understanding of hardware and the Linux operating system should be better than a novice’s! I will not be held responsible for loss of data, bricked hardware, or anything else related to the steps I’m laying out here! You have been warned!

Getting Started: Breaking dmraid!

In order to begin, the existing dmraid array must be broken and the system must be able to boot from one of the two drives without RAID support. This is a two-step process: first erase the dmraid metadata from the drives making up the array, then remove dmraid, in the process rebuilding the initramfs so that it does not expect a dmraid device.

To be safe, boot the system as usual and set the run level to 1. This should be done from a tty terminal and not from a terminal window inside the GUI. Use <Ctrl><Alt><F1> to open tty1, then log in and set the run level. This should be done either as root or with root privileges via sudo.

# telinit 1

This should generate the Recovery Menu from which you can select Root – Drop to root shell prompt. After providing the root user password the system will be running in single-user mode (no multiple ttys) and logged in as root.

To break dmraid it will be necessary to erase the metadata stored on the disks that make up the array. Once the metadata has been erased the system will no longer be bootable via RAID. Therefore, it will be necessary to edit a few files in order to ensure the system remains bootable.

First, however, break the dmraid with:
# dmraid -E -r /dev/sd[ab]
NOTE: this assumes the array was built with /dev/sda and /dev/sdb. Your system may vary.

Once the metadata has been removed, the drives will no longer be recognized by the BIOS RAID controller as part of the FakeRAID array. It will be necessary to tell GRUB where the root file system is, and to tell mount where to find the various file system partitions.

Edit /etc/fstab and substitute the appropriate devices for the dmraid-mapped devices. Example:
# vim /etc/fstab
# /etc/fstab: static file system information.
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda5 during installation
#UUID=e9eeafc1-691a-4904-8032-2cc6c75bc175 / ext4 noatime,errors=remount-ro 0 1
/dev/sda2 / ext4 noatime,errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
#UUID=c4a18ea5-336e-42a0-8da8-26f6f3d98d48 /boot ext3 noatime 0 2
/dev/sda1 /boot ext3 noatime 0 2
# swap was on /dev/sda6 during installation
#UUID=64571446-2b40-4d09-9fe0-82d262ebce14 none swap sw 0 0
/dev/sda3 none swap sw 0 0
#/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

NOTE: You could certainly substitute UUIDs for the devices, but since this is only a temporary change until the mdadm RAID is complete, there is no point in doing so; just make the changes to the devices and push on.
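As a sketch of this edit, the substitution can be done with sed. The mapped device name isw_example_set1 below is invented (real dmraid names vary by controller), and the sketch runs against a one-line sample fstab rather than the real file:

```shell
# A sample root entry as a dmraid install might write it; the mapped device
# name is a made-up example for this sketch:
printf '%s\n' '/dev/mapper/isw_example_set1 / ext4 noatime,errors=remount-ro 0 1' > /tmp/fstab.example

# Point the root entry at the plain disk partition instead:
sed -i 's|/dev/mapper/isw_example_set[0-9]*|/dev/sda2|' /tmp/fstab.example
cat /tmp/fstab.example
```

Against the real /etc/fstab, work on a backup copy first and check the result with cat before rebooting.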

Add a new entry to /boot/grub/menu.lst which will boot using the device map:
# vim /boot/grub/menu.lst
title Ubuntu 9.04, kernel 2.6.28-19-generic (w/o RAID)
root (hd0,0)
kernel /vmlinuz-2.6.28-19-generic root=/dev/sda2 ro
initrd /initrd.img-2.6.28-19-generic

The above entry should be added to the section of menu.lst that describes the other boot options. Note that the root directive should point to (hdx,y), and the kernel directive’s root option should point to the physical device defined for root.
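GRUB legacy counts disks and partitions from zero, so /dev/sda1 becomes (hd0,0) and /dev/sdb3 would be (hd1,2). The toy helper below (not part of GRUB or any other tool, purely an illustration of the mapping) makes that explicit:

```shell
# Translate /dev/sdXN to GRUB legacy (hd…,…) notation.
# Assumes simple /dev/sd[a-z]N names; illustration only.
to_grub() {
  dev=${1#/dev/sd}                                   # e.g. "a1"
  disk=$(( $(printf '%d' "'${dev%%[0-9]*}") - 97 ))  # 'a' -> 0, 'b' -> 1
  part=$(( ${dev#?} - 1 ))                           # partitions are 0-based
  echo "(hd$disk,$part)"
}

to_grub /dev/sda1   # prints (hd0,0)
```

Getting this mapping wrong is one of the more common reasons a hand-edited menu.lst entry fails to boot.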

Lastly, remove dmraid. Doing so will ensure the correct kernel is in place for the next boot.
# aptitude remove --purge dmraid

Disable FakeRAID in BIOS

At this point boot the system. During the boot be sure to modify the BIOS and disable RAID. I chose to configure SATA as AHCI and set the boot priority to boot from HDD:P0. Other BIOS may have other settings.

If all went well the system should now boot without dmraid to /dev/sda.

Create “broken” RAID 1 array with mdadm.

Once it has been established that the system is bootable without dmraid, install mdadm. This step could have been done earlier without harm to the system. Just be sure mdadm has been installed before proceeding with the following actions.

The first thing that will have to be done is to partition the disk not currently in use (/dev/sdb in this case) for use with mdadm, as Linux raid autodetect partitions. Assuming no physical changes will be made to the layout of the partitions, this simply requires changing the type of the partitions and formatting them with the appropriate filesystem.

Begin by changing the partition type for each of the partitions which will be members of the array to Linux raid autodetect, which is type fd.
# sudo fdisk /dev/sdb

The number of cylinders for this disk is set to 77825.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs

Command (m for help): t
Partition number (1-7): 1
Hex code (type L to list codes): fd

Command (m for help):
Repeat these steps for all appropriate partitions and write the results. The system will warn you that the new table will not be used until the next boot. This is expected. Do not reboot at this time.
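Before creating the array, the result can be sanity-checked against fdisk -l. The sketch below parses fabricated partition lines for a two-partition disk; every member partition should report type fd:

```shell
# Fabricated `fdisk -l /dev/sdb` partition lines for illustration:
fdisk_out='/dev/sdb1            1        31     248976   fd  Linux raid autodetect
/dev/sdb2           32     77825  624860055   fd  Linux raid autodetect'

# Flag any partition that is not type fd (Linux raid autodetect):
printf '%s\n' "$fdisk_out" | awk '$0 !~ /fd +Linux raid autodetect/ { bad = 1 }
  END { print (bad ? "fix partition types" : "all partitions are type fd") }'
```

On the real system the same check is just `fdisk -l /dev/sdb` read by eye.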

Next, create the single-disk RAID 1 array. Note that the “missing” keyword is specified in place of one of the devices; we will fill in this missing device later.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm: array /dev/md0 started.

Note: If the above command causes mdadm to say “no such device /dev/sdb2”, then reboot, and run the command again.

If you want to use GRUB 0.97 (the default in Ubuntu 9.04) on RAID 1, you need to specify an older metadata version than the default. Add the option "--metadata=0.90" to the above command. Otherwise GRUB will respond with “Filesystem type unknown, partition type 0xfd” and refuse to install. This is supposedly not necessary with GRUB 2.

# mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm: array /dev/md0 started.

Make sure the array has been created correctly by checking /proc/mdstat:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md0 : active raid1 sdb2[1]
40064 blocks [2/1] [_U]

unused devices:

The device is intact, but the array is in a degraded state (because it’s missing half its members!).
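A degraded mirror can be spotted mechanically from /proc/mdstat: an underscore in the bracketed status (e.g. [_U]) marks a missing member. Here is a small sketch using the status above as sample input; the mdadm --add command in the comment is the eventual next step, with an example device name:

```shell
# The mdstat status from above; in "[_U]" the underscore marks the empty slot.
mdstat='md0 : active raid1 sdb2[1]
40064 blocks [2/1] [_U]'

if printf '%s\n' "$mdstat" | grep -q '\[[U_]*_[U_]*\]'; then
  echo "md0 is degraded"
fi

# Later, once the old dmraid disk has been repartitioned, it would be added
# back with something like (device name is an example for this walkthrough):
#   mdadm --add /dev/md0 /dev/sda2
```

Once the missing member is added, /proc/mdstat shows the rebuild progress until the status returns to [UU].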


