Software RAID

The Redundant Array of Independent Disks (RAID) technology has become the standard way of mirroring hard drives within a machine or combining multiple hard drives to form one giant partition. In many types of RAID arrays, if one disk fails, the system can continue to run without data loss until you replace the failed disk or a second disk fails.

You can get RAID from hardware, or have the operating system perform the RAID operations. Hardware RAID controllers seem nice, but in reality they're just decent disk controllers that run special software. Using the softraid(4) driver, OpenBSD can do the same thing, letting you build RAID arrays out of plain disks. With a pile of disks, the softraid(4) software RAID driver, and OpenBSD's RAID management program bioctl(8), you can do just about everything a hardware RAID controller can.

Note

In addition to managing software RAID, OpenBSD’s bioctl(8) can manage most sorts of hardware RAID controllers. If you’re planning to use hardware RAID, reading the bioctl manual is definitely worth your time.

RAID Types

OpenBSD supports the following RAID configurations:

RAID-0, or striping

This type is not redundant. It requires at least two disks of the same size, and data is striped across the disks to increase partition size and throughput. You can use RAID-0 to combine five 4TB disks into a 20TB virtual disk, but be warned: if any one hard drive in the array fails, you'll lose all your data. RAID-0 is useful when you need a really big filesystem, but it's more vulnerable than a single disk because every drive in the array is a point of failure (or as one of my quasi-literary, quasi-humorous friends once said, "RAID-0 gives a whole new meaning to the phrase one disk to rule them all"). The size of a RAID-0 array is the size of all the hard drives combined.

RAID-1, or mirroring

With this type, the contents of one disk are duplicated on another. Mirroring requires at least two disks of the same size, and the size of a RAID-1 array is equal to the size of the smallest drive in the array. I use mirroring to protect all vital data, as it gives even a cheap desktop-chassis server some measure of data protection. OpenBSD’s software RAID fully supports this level.

RAID-4, or striping data across disks, with a dedicated parity disk

This type requires at least three disks of the same size. Parity data lets a RAID array recover data on missing disks, and RAID-4 stores that parity data on a specific disk. This means that you can lose any one of the disks without losing data. As I write this, bioctl’s RAID-4 support is experimental. Hopefully this support will be complete before the book reaches you, but if not, you’ll need to use a hardware RAID card to get RAID-4.

RAID-5, or striping with parity shared across all drives

This is the current industry standard for redundancy. Parity data provides data redundancy: the loss of a single drive doesn't destroy any data. It requires at least three disks of the same size. Unlike RAID-4, RAID-5 distributes the parity data across all the drives. While throughput isn't as good as that of RAID-0, a RAID-5 array can serve multiple I/O requests simultaneously. The size of your RAID-5 array is the combined size of all but one of your hard drives. If you have five 4TB drives, the array will be 16TB ((5 – 1) × 4TB). Like RAID-4, RAID-5 support in bioctl is incomplete and experimental. I hope it will be complete before you read this, but if not, you'll need to use a hardware RAID card for RAID-5.

According to the RAID standards, each of these levels requires disks of the same size. That said, OpenBSD’s softraid uses partitions rather than disks. You can use disks of different sizes, but your RAID array will use only an amount of space on each disk equal to the smallest drive. If you want to mirror a 1TB drive and a 2TB drive, your mirror will offer only 1TB of space. The excess space on the larger drive is wasted.[23]

In addition to the standard RAID methods, softraid also allows you to encrypt your data across all disks in a RAID array (as described in Encrypted Disk Partitions). It also lets you concatenate disks. Concatenated disks are just run together to create one large virtual disk. You could concatenate two 500GB disks and a 1TB disk to create a single 2TB partition. These disks don't need to be the same size, but as with RAID-0, they are vulnerable. Damage to any one disk will completely wreck the virtual disk, destroying all your data. As the process for creating a concatenated disk closely resembles that of creating a RAID-0 disk, we'll cover it in Creating softraid Devices.

Preparing Disks for softraid

The softraid software RAID device builds its virtual disks out of disklabel partitions. To use a disk in a softraid array, prepare it just as you would a disk for a regular filesystem.

On i386 and amd64, disks underlying a softraid device need an MBR partition. To mark a whole disk with a single MBR partition, run fdisk -i on the disk.

Suppose you have five disks to use in a RAID array: sd2, sd3, sd4, sd5, and sd6. You’ll need to prepare each of them as follows:

# fdisk -i sd2
Do you wish to write new MBR and partition table? [n] y
Writing MBR at offset 0.

Repeat this for every disk in your array.
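
If you'd rather not answer the prompt for each disk, a quick shell loop handles the rest; fdisk(8)'s -y flag skips the confirmation question. This is a convenience sketch, assuming your remaining array disks really are sd3 through sd6:

# for d in sd3 sd4 sd5 sd6; do fdisk -iy $d; done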

Once you’ve added an MBR to all your disks, you’ll need to put a disklabel partition on each disk. I tend to use partition letter p (the last available partition letter) for softraid devices. Here’s how to set up a disk for softraid:

# disklabel -E sd2
Label editor (enter '?' for help at any prompt)
> a
partition: [a] p
offset: [64]
size: [104856191]
FS type: [4.2BSD] RAID
> q
Write new label?: [y] y

First, we use the a command to add a partition and assign it partition letter p. We accept the default offset and size, but instead of our usual filesystem type of 4.2BSD, we assign a filesystem type of RAID. Then we quit the editor and let disklabel write the new label to the disk.

If you have multiple identical disks, you can use disklabel to save this disk’s configuration, as follows:

# disklabel sd2 > disklabel.sd2.raid

This saves the label on disk sd2 to the file disklabel.sd2.raid. You can make disklabel(8) copy this partitioning to other disks, and disklabel will assign each disk a unique DUID as it copies. This saves you from needing to walk through the interactive editor for each disk. Let's apply this disklabel to each of the remaining disks:

# disklabel -R sd3 disklabel.sd2.raid
# disklabel -R sd4 disklabel.sd2.raid
# disklabel -R sd5 disklabel.sd2.raid
# disklabel -R sd6 disklabel.sd2.raid

Disks sd2 through sd6 are now ready for assimilation into softraid.

Creating softraid Devices

Use bioctl(8) to drag disks into a software RAID. You'll need to know which disk partitions you want to include in the RAID. OpenBSD software RAID arrays are named softraid, followed by a number. Use the -c argument to give the RAID level and the -l argument to list the partitions, and end with the name of the softraid device you're creating.

# bioctl -c raidlevel -l partition1,partition2… softraidX

We have five disk partitions—sd2p, sd3p, sd4p, sd5p, and sd6p—to add to a softraid device. To build a RAID-5 device out of these partitions, run this command:

# bioctl -c 5 -l sd2p,sd3p,sd4p,sd5p,sd6p softraid0
softraid0: SR RAID 5 volume attached as sd7

The response indicates that we've successfully created a RAID-5 device, available as device /dev/sd7. This RAID disk is blank, and you need to prepare it just as you would any other new disk: run fdisk -i sd7 and disklabel to create the MBR and OpenBSD partitions, use newfs to create a filesystem on the new partitions, and you're ready to go. (See the instructions for adding a new disk in Chapter 8 for details.)
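
Condensed, preparing the new virtual disk might look like this sketch. The partition letter a and the mount point /mnt are my assumptions; lay out the disklabel to suit your needs.

# fdisk -i sd7
# disklabel -E sd7     (add an a partition with the default 4.2BSD type)
# newfs sd7a
# mount /dev/sd7a /mnt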

You could have made this a RAID-0, RAID-1, or RAID-4 device by choosing a different -c option. The tricky one is a concatenated softraid. To dump all the disks together into a single concatenated virtual partition, use -c c.

# bioctl -c c -l sd2p,sd3p,sd4p,sd5p,sd6p softraid0
softraid0: SR CONCAT volume attached as sd7
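
For comparison, here's what creating a simple two-disk mirror out of sd2p and sd3p might look like; the attach message is my guess, patterned after the RAID-5 output above:

# bioctl -c 1 -l sd2p,sd3p softraid0
softraid0: SR RAID 1 volume attached as sd7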

softraid Status

To check the health of each device in a RAID array, give bioctl the device name of the softraid device.

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Online       214744170496 sd7     RAID5
          0 Online        53686099456 0:0.0   noencl <sd2p>
          1 Online        53686099456 0:1.0   noencl <sd3p>
          2 Online        53686099456 0:2.0   noencl <sd4p>
          3 Online        53686099456 0:3.0   noencl <sd5p>
          4 Online        53686099456 0:4.0   noencl <sd6p>

We see that the five drives are in use, all assembled into a RAID-5 virtual drive. Everything here is healthy. Anything that doesn’t look roughly like this indicates a problem.

Identifying Failed softraid Volumes

If you have a RAID-1, RAID-4, or RAID-5 softraid volume, you can lose a drive and not lose your data. bioctl tells you if a drive fails. Here, one of the drives in my softraid volume has failed:

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Degraded     214744170496 sd7     RAID5
          0 Online        53686099456 0:0.0   noencl <sd2p>
          1 Offline                  0 0:1.0   noencl <>
          2 Online        53686099456 0:2.0   noencl <sd3p>
          3 Online        53686099456 0:3.0   noencl <sd4p>
          4 Online        53686099456 0:4.0   noencl <sd6p>

Looking closely at this, I can see that drives sd2, sd3, sd4, and sd6 are still available and in use. All my data should still be intact, but I need to replace sd5 before another disk fails.

Rebuilding Failed softraid Volumes

As of this writing, you cannot rebuild a failed softraid RAID-4 or RAID-5 device. You must back up your data, replace the failed drive, delete the softraid device, re-create the device and its filesystem, and restore from backup. You can, however, rebuild a RAID-1 device.
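
In outline, that recovery looks something like the following sketch, reusing the device names from our RAID-5 example. Your backup and restore commands depend on your backup system:

(back up the filesystems on the softraid volume)
# bioctl -d sd7
(replace the failed disk and prepare it as in "Preparing Disks for softraid")
# bioctl -c 5 -l sd2p,sd3p,sd4p,sd5p,sd6p softraid0
(fdisk, disklabel, and newfs the new volume, then restore from backup)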

Let’s look at replacing a disk in a RAID-1 device. Here’s what a healthy, three-disk softraid mirror might look like:

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Online        53686099456 sd5     RAID1
          0 Online        53686099456 0:0.0   noencl <sd2p>
          1 Online        53686099456 0:1.0   noencl <sd3p>
          2 Online        53686099456 0:2.0   noencl <sd4p>

Note that this RAID device has device node sd5 and includes the partitions sd2p, sd3p, and sd4p.

Suppose we replace two of these disks and reboot the machine. Suddenly, the softraid device looks very different.

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Degraded      53686099456 sd5     RAID1
          0 Offline                 0 0:0.0   noencl <>
          1 Offline                 0 0:1.0   noencl <>
          2 Online        53686099456 0:2.0   noencl <sd2p>

Partitions sd3p and sd4p are missing. That's because the underlying disks have been replaced.[24] Prepare the replacement disks for software RAID, as discussed in Preparing Disks for softraid. Then run bioctl with the -R flag, giving the replacement partition and the softraid device to rebuild.

# bioctl -R /dev/sd3p sd5
softraid0: rebuild of sd5 started on sd3p

If you check the status of the device using bioctl, you’ll see the disk status now says “Rebuilding.”
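
Checked mid-rebuild, the output might look something like this. The exact columns, chunk states, and percent-done figure are my approximations and vary between OpenBSD versions:

# bioctl softraid0
Volume      Status               Size Device
softraid0 0 Rebuild       53686099456 sd5     RAID1 34% done
          0 Rebuild       53686099456 0:0.0   noencl <sd3p>
          1 Offline                 0 0:1.0   noencl <>
          2 Online        53686099456 0:2.0   noencl <sd2p>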

If you have a mirror with more than two disks, you must rebuild each disk separately. Rebuild the first disk, and then rebuild the second disk.
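
In our example, both sd3p and sd4p need to be rebuilt, so that means two passes; let the first finish before starting the second:

# bioctl -R /dev/sd3p sd5
(wait for the first rebuild to complete)
# bioctl -R /dev/sd4p sd5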

Deleting softraid Devices

To remove a softraid device from your system, pass bioctl the -d flag and the device name for the softraid device. Here’s how to remove the RAID-5 device we just created:

# bioctl -d sd7

Warning

Once you delete the RAID device, you can’t get it back unless you re-create it and restore your data from backup.

Reusing softraid Disks

softraid writes metadata at the beginning of the disks it uses. You need to overwrite this metadata before you can use the disks in another softraid device. Overwrite the first megabyte or so of the disk with dd(1).

# dd if=/dev/zero of=/dev/sd2c bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.594 secs (1765074 bytes/sec)

This erases the MBR partition table, the disklabel, the softraid metadata, and any filesystem information at the start of the disk. You can now reuse these disks in softraid devices or as normal disks.

Booting from a softraid Device

The softraid feature is still in development. Eventually, you'll be able to use the installer to build a software RAID device, install OpenBSD on that device, and run a full RAID configuration out of the box. But as I write this, you'll need to jump through some hoops to make that happen. Rather than document a specific procedure that will change as OpenBSD completes softraid development, I'm going to tell you to search the Internet and the OpenBSD mailing list archives for the most recent instructions.
