HOWTO: Linux Software Raid using mdadm Ubuntu version

1) Introduction:

If you want to add a second hard drive with the purpose of setting yourself up a raid 1, raid 5 or any other raid type, then a lot of the information here will still apply.

 

2) 'Fake' raid vs Software raid:

What is a 'fake' raid? I think Wikipedia explains it quite well: http://en.wikipedia.org/wiki/Redunda...ependent_disks

 

Quote:

 

Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.

 

After realising this, I spent some time trying to get this fake raid to work - the problem is that although the motherboard came with drivers that let Windows see my two 250 GB drives as one large 500 GB raid array, Ubuntu just saw the two separate drives and ignored the 'fake' raid completely. There are ways to get this fake raid working under Linux, but if you are presented with this situation then my advice is to abandon the onboard raid controller and go for software raid instead. I've seen arguments as to why software raid is faster and more flexible, but I think the best reason is that software raid is far easier to set up!

3) The Basics of Linux Software Raid:

For the basics of raid, try looking on Wikipedia again: http://en.wikipedia.org/wiki/Redunda...ependent_disks. I don't want to discuss it myself because it's been explained many times before by people who are far more qualified to explain it than I am. I will, however, go over a few things about software raid:

Linux software raid is more flexible than hardware raid or 'true' raid because rather than forming raid arrays between identical disks, the raid arrays are created between identical partitions. As far as I understand, if you are using hardware raid between (for example) two disks, then you can either create a raid 1 array between those disks, or a raid 0 array. Using software raid, however, you could create two sets of identical partitions on the disks, and form a raid 0 array between two of those partitions and a raid 1 array between the other two. If you wanted to, you could probably even create a raid array between two partitions on the same disk! (not that you would want to!)

The process of setting up a raid array is simple:

 

  1. Create two identical partitions
  2. Tell the software what the name of the new raid array is going to be, what partitions we are going to use, and what type of array we are creating (raid 0, raid 1 etc...)

 


Once we have created this array, we then format and mount it in much the same way as we would a partition on a physical disk.
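
As a rough command-line sketch of those two steps plus the format-and-mount stage (the device names /dev/sda2 and /dev/sdb2 and the mount point /mnt/raid are placeholders for illustration, not part of my actual setup):

Code:

# step 2: create the array from two identical partitions (raid 0 in this example)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# then format and mount it like any other block device
sudo mkfs.ext3 /dev/md0
sudo mkdir /mnt/raid
sudo mount /dev/md0 /mnt/raid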

4) Which Live CD to use:

You want to download and burn the alternate install Ubuntu CD of your choosing. For example, I used:

 

Code:

 

ubuntu-6.10-alternate-amd64.iso

 

If you boot up the Ubuntu desktop live CD and need to access your raid, then you will need to install mdadm before you can access any software raid arrays:

 

Code:

 

sudo apt-get update

sudo apt-get install mdadm

 

Don't worry too much about this for now - you will only need it if you ever use the Ubuntu desktop CD to fix your installation; the alternate install CD has the mdadm tools installed already.

5) Finally, let's get on with it!

Boot up the installer
Boot up the alternate install CD and run through the text based installation until you reach the partitioner, and select "Partition Manually".

Create the partitions you need for each raid array
You now need to create the partitions which you will (in the next step) turn into software raid arrays. I recommend using the space at the start of your disks or, if your disks are identical, at the end. That way, once you've set one disk up, you can just enter exactly the same details for the second disk. The partitioner should be straightforward enough to use - when you create a partition which you intend to use in a raid, you need to change its type to "Linux RAID Autodetect".

How you partition your installation is up to you, however there are a few things to bear in mind:

 

  1. If (like me) you are going for a performance raid, then you will need to create a separate /boot partition, otherwise grub won't be able to boot - it doesn't have the drivers needed to access raid 0 arrays. It sounds simple, but it took me a long time to figure out.
  2. If, on the other hand, you are doing a server installation (for example) using raid 1 / 5 and the goal is reliability, then you probably want the computer to be able to boot up even if one of the disks is down. In this situation you need to do something different with the /boot partition again. I'm not sure how it works myself, as I've never used raid 1, but you can find some more information in the links at the end of this guide. Perhaps I'll have a play around and add this to the guide later on, for completeness' sake.
  3. If you are looking for performance, then there isn't much point creating a raid array for swap space. The kernel can manage multiple swap spaces by itself (we will come to that later).
  4. Again, if you are looking for reliability however, then you may want to build a raid partition for your swap space, to prevent crashes should one of your drives fail. Again, look for more information in the links at the end.

 


On my two identical 250 GB drives, I created two 1 GB swap partitions, two +150 GB partitions (to become a raid 0 array for my /home space), and two +40 GB partitions (to become a raid 0 array for my root space), all inside an extended partition at the end of my drives. I then also created a small 500 MB partition on the first drive, which would become my /boot space. I left the rest of the space on my drives for NTFS partitions.

Assemble the partitions as raid devices
Once you've created your partitions, select the "Configure software raid" option. The changes to the partition table will be written to disk, and you will be allowed to create and delete raid devices - to create a raid device, simply select "Create", select the type of raid array you want to create, and select the partitions you want to use. Remember to check which partition numbers you are going to use in which raid arrays - if you forget, hit <Esc> a few times to get back to the partition editor screen where you can see what's going on.

Tell the installer how to use the raid devices
Once you are done, hit "Finish" - you will be taken back to the partitioner, where you should see some new raid devices listed. Configure these in the same way you would other partitions - set their mount points, and decide on their filesystem types.

Finish the installation
Once you are done setting up these raid devices (and any swap / boot partitions you decided to keep as non-raid), the installation should run smoothly.

6) Configuring Swap Space

I mentioned before that the Linux kernel automatically manages multiple swap partitions, meaning you can spread swap partitions across multiple drives for a performance boost without needing to create a raid array. A slight tweak may be needed, however: each swap partition has a priority, and if you want the kernel to use both at the same time, you need to set the priority of each swap partition to be the same. First, type

 

Code:

 

swapon -s

 

to see your current swap usage. Mine outputs the following:

 

Code:

 

Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       979956  39080   -1
/dev/sdb5                               partition       979956  0       -2

 

As you can see, the second swap partition isn't being used at the moment, and won't be until the first one is full. I want a performance gain, so I need to fix this by setting the priority of each partition to be the same. Do this in /etc/fstab, by adding pri=1 to the options field of each of your swap partitions (so the sw option becomes sw,pri=1). My /etc/fstab file now looks like this:

 

Code:

 

# /dev/sda5
UUID=551aaf44-5a69-496c-8d1b-28a228489404 none            swap    sw,pri=1        0       0
# /dev/sdb5
UUID=807ff017-a9e7-4d25-9ad7-41fdba374820 none            swap    sw,pri=1        0       0
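
To make the new priorities take effect without rebooting, you can turn swap off and back on, then check with swapon -s again (these are standard commands, just a suggestion):

Code:

sudo swapoff -a
sudo swapon -a
swapon -s

Both swap partitions should now be listed with the same priority.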

 

7) How to do things manually

As I mentioned earlier, if you ever boot into your installation with a live CD, you will need to install mdadm to be able to access your raid devices, so it's a good idea to at least roughly know how mdadm works. http://man-wiki.net/index.php/8:mdadm has some detailed information, but the important options are simply:

 

Code:

 

-A, --assemble
        Assemble a pre-existing array that was previously created with --create.

 

 

 

-C, --create
        Create a new array. You only ever need to do this once; if you try to create an array using partitions that are already part of other arrays, mdadm will warn you.

 

 

 

--stop
        Stop an assembled array. The array must be unmounted before this will work.

 

 

 

When using --create, the options are:

 

Code:

 

mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices

 

-c, --chunk=
        Specify chunk size in kibibytes. The default is 64.

 

 

 

-l, --level=
        Set raid level; options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp, faulty.

 

 

 

-n, --raid-devices=
        Specify the number of active devices in the array.

 

 

 

for example:

 

Code:

 

mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

 

will create a raid 0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1, with a chunk size of 4 kibibytes.

When using --assemble, the usage is simply:

 

Code:

 

mdadm --assemble md-device component-devices

 

for example

 

Code:

 

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

 

which will assemble the raid array /dev/md0 from the partitions /dev/sda1 and /dev/sdb1.

Alternatively you can use:

 

Code:

 

mdadm --assemble --scan

 

and it will assemble any raid arrays it can detect automatically.

Lastly,

 

Code:

 

mdadm --stop /dev/md0

 

will stop the assembled array md0, so long as it's not mounted.

If you wish, you can set the partitions up manually using fdisk and mdadm from the command line. Either boot up a desktop live CD and apt-get install mdadm as described before, or boot up the alternate installer and hit escape until you see a list of the different stages of installation - the bottom one should read "execute shell" - which will drop you at a shell with fdisk, mdadm, mkfs etc. available.
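
If you go the fdisk route, a rough session for marking a new raid member partition looks something like this (the disk /dev/sda is just an example; the key point is setting the partition type to fd, "Linux raid autodetect"):

Code:

sudo fdisk /dev/sda
# n  - create a new partition (answer the prompts for number and size)
# t  - change the partition type, and enter fd (Linux raid autodetect)
# w  - write the partition table to disk and exit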

Note that if you ever need to create another raid device, you create a filesystem on it in exactly the same way you would on a normal physical partition. For example, to create an ext3 filesystem on /dev/md0 I would use:

 

Code:

 

mkfs.ext3 /dev/md0

 

And to create a swap space on /dev/sda7 I would use:

 

Code:

 

mkswap /dev/sda7

 

Lastly, mdadm has a configuration file located at

 

Code:

 

/etc/mdadm/mdadm.conf

 

This file is usually generated automatically, and mdadm will probably work fine without it anyway. If you're interested, then http://man-wiki.net/index.php/5:mdadm.conf has some more information.
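
If you ever want to regenerate it yourself (for example after assembling arrays from a live CD), one common approach is to append the output of a scan - a suggestion, not something the installer requires:

Code:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf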

And that's pretty much it. As long as you have mdadm available, you can create / assemble raid arrays out of identical partitions. Once you've assembled the array, treat it the same way you would a partition on a physical disk, and you can't really go wrong!

I hope this has helped someone! At the moment I've omitted certain aspects of dealing with raids with redundancy (like raid 1 and raid 5), such as rebuilding failed arrays, simply because I've never done it before. Again, I may have a play around and add some more information later (for completeness), or if anyone else running a raid 1 wants to contribute, it would be most welcome.

Other links

The Linux Software Raid Howto:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html
This guide refers to a package "raidtools2" which I couldn't find in the Ubuntu repositories - use mdadm instead, it does the same thing.

Quick HOWTO: Linux Software Raid
http://www.linuxhomenetworking.com/w..._Software_RAID

Using mdadm to manage Linux Software Raid arrays
http://www.linuxdevcenter.com/pub/a/...2/05/RAID.html

Ubuntu Fake Raid HOWTO In the community contributed documentation
https://help.ubuntu.com/community/Fa...ght=%28raid%29

I hope this guide helps people understand and set up their raids.


http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html

 


5. RAID setup

 

5.1 General setup

 

This is what you need for any of the RAID levels:

 

  • A kernel. Preferably a kernel from the 2.4 series. Alternatively a 2.0 or 2.2 kernel with the RAID patches applied.
  • The RAID tools.
  • Patience, Pizza, and your favorite caffeinated beverage.

 

All of this is included as standard in most GNU/Linux distributions today.

 

If your system has RAID support, you should have a file called /proc/mdstat. Remember it, that file is your friend. If you do not have that file, maybe your kernel does not have RAID support. See what the file contains by doing a cat /proc/mdstat. It should tell you that you have the right RAID personality (e.g. RAID mode) registered, and that no RAID devices are currently active.
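
On a kernel with RAID support but no arrays assembled yet, the output looks something like this (illustrative only - the personalities listed depend on which RAID modules are loaded):

  Personalities : [linear] [raid0] [raid1]
  unused devices: <none>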

 

Create the partitions you want to include in your RAID set.

 

5.2 Downloading and installing the RAID tools

 

The RAID tools are included in almost every major Linux distribution.

 

IMPORTANT: If using Debian Woody (3.0) or later, you can install the package by running

 

apt-get install raidtools2

 

This raidtools2 is a modern version of the old raidtools package; the old raidtools does not support the persistent-superblock and parity-algorithm settings.

 

5.3 Downloading and installing mdadm

 

You can download the most recent mdadm tarball at http://www.cse.unsw.edu.au/~neilb/source/mdadm/. Issue a nice make install to compile and then install mdadm and its documentation, manual pages and example files.

 

tar xvf ./mdadm-1.4.0.tgz

cd mdadm-1.4.0

make install

 

If using an RPM-based distribution, you can download and install the package file found at http://www.cse.unsw.edu.au/~neilb/source/mdadm/RPM.

 

rpm -ihv mdadm-1.4.0-1.i386.rpm 

 

If using Debian Woody (3.0) or later, you can install the package by running

 

apt-get install mdadm

 

Gentoo has this package available in the portage tree. There you can run

 

emerge mdadm

 

Other distributions may also have this package available. Now, let's go mode-specific.

 

5.4 Linear mode

 

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

 

Set up the /etc/raidtab file to describe your setup. I set up a raidtab for two disks in linear mode, and the file looked like this:

 

raiddev /dev/md0
        raid-level      linear
        nr-raid-disks   2
        chunk-size      32
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

 

Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.

 

You're probably wondering why we specify a chunk-size here when linear mode just appends the disks into one large array with no parallelism. Well, you're completely right, it's odd. Just put in some chunk size and don't worry about this any more.

 

Ok, let's create the array. Run the command

 

  mkraid /dev/md0

 

This will initialize your array, write the persistent superblocks, and start the array.

 

If you are using mdadm, a single command like

 

   mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

 

should create the array. The parameters speak for themselves. The output might look like this:

 

  mdadm: chunk size defaults to 64K

 

  mdadm: array /dev/md0 started.

 

Have a look in /proc/mdstat. You should see that the array is running.

 

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab and so on.
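
For example (a sketch only - the ext2 filesystem, the mount point /mnt/md0 and the fstab line are illustrative):

   mke2fs /dev/md0
   mkdir /mnt/md0
   mount /dev/md0 /mnt/md0

and the corresponding /etc/fstab entry:

   /dev/md0   /mnt/md0   ext2   defaults   0   2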

 

5.5 RAID-0

 

You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.

 

Set up the /etc/raidtab file to describe your configuration. An example raidtab looks like:

 

raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        persistent-superblock 1
        chunk-size      4
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

 

Like in Linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it.

 

Again, you just run

 

  mkraid /dev/md0

 

to initialize the array. This should initialize the superblocks and start the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.

 

/dev/md0 is now ready to be formatted, mounted, used and abused.
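
If you are using mdadm rather than raidtab/mkraid, a roughly equivalent command for the RAID-0 setup above (same devices and chunk size) would be:

   mdadm --create --verbose /dev/md0 --level=0 --chunk=4 --raid-devices=2 /dev/sdb6 /dev/sdc5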

 

5.6 RAID-1

 

You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare-disks, that will automatically become part of the mirror if one of the active devices breaks.

 

Set up the /etc/raidtab file like this:

 

raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        persistent-superblock 1
        device          /dev/sdb6
        raid-disk       0
        device          /dev/sdc5
        raid-disk       1

 

If you have spare disks, you can add them to the end of the device specification like

 

        device          /dev/sdd5
        spare-disk      0

 

Remember to set the nr-spare-disks entry correspondingly.
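
With mdadm, the same mirror (including the optional spare) can be created in one command - a sketch using the device names from the raidtab above, with the spare listed last:

   mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb6 /dev/sdc5 /dev/sdd5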

 

Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.

 

Issue the

 

  mkraid /dev/md0

 

command to begin the mirror initialization.

 

Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and an ETA of the completion of the reconstruction.

 

Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.

 

The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.

 

Try formatting the device while the reconstruction is running. It will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.

 

5.7 RAID-4

 

Note! I haven't tested this setup myself. The setup below is my best guess, not something I have actually had up and running. If you use RAID-4, please write to the author and share your experiences.

 

You have three or more devices of roughly the same size, one device is significantly faster than the other devices, and you want to combine them all into one larger device, still maintaining some redundancy information. You may also have a number of devices you wish to use as spare-disks.

 

Set up the /etc/raidtab file like this:

 

raiddev /dev/md0
        raid-level      4
        nr-raid-disks   4
        nr-spare-disks  0
        persistent-superblock 1
        chunk-size      32
        device          /dev/sdb1
        raid-disk       0
        device          /dev/sdc1
        raid-disk       1
        device          /dev/sdd1
        raid-disk       2
        device          /dev/sde1
        raid-disk       3

 

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

 

        device         /dev/sdf1
        spare-disk     0

 

as usual.

 

Your array can be initialized with the

 

   mkraid /dev/md0

 

command as usual.

 

You should see the section on special options for mke2fs before formatting the device.

 

5.8 RAID-5

 

You have three or more devices of roughly the same size, you want to combine them into a larger device, but still maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare-disks, that will not take part in the array until another device fails.

 

If you use N devices where the smallest has size S, the size of the entire array will be (N-1)*S. This "missing" space is used for parity (redundancy) information. Thus, if any one disk fails, all data stays intact. But if two disks fail, all data is lost.

 

Set up the /etc/raidtab file like this:

 

raiddev /dev/md0
        raid-level      5
        nr-raid-disks   7
        nr-spare-disks  0
        persistent-superblock 1
        parity-algorithm        left-symmetric
        chunk-size      32
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb1
        raid-disk       1
        device          /dev/sdc1
        raid-disk       2
        device          /dev/sdd1
        raid-disk       3
        device          /dev/sde1
        raid-disk       4
        device          /dev/sdf1
        raid-disk       5
        device          /dev/sdg1
        raid-disk       6

 

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

 

        device         /dev/sdh1
        spare-disk     0

 

And so on.

 

A chunk size of 32 kB is a good default for many general purpose filesystems of this size. The array on which the above raidtab is used is a 7 times 6 GB = 36 GB (remember the (N-1)*S = (7-1)*6 = 36) device. It holds an ext2 filesystem with a 4 kB block size. You could go higher with both array chunk-size and filesystem block-size if your filesystem is either much larger, or just holds very large files.
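
For reference, the mdadm equivalent of the raidtab above would be roughly as follows (a sketch; the --layout option mirrors the parity-algorithm line):

   mdadm --create --verbose /dev/md0 --level=5 --chunk=32 --layout=left-symmetric --raid-devices=7 \
         /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1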

 

Ok, enough talking. You set up the /etc/raidtab, so let's see if it works. Run the

 

  mkraid /dev/md0

 

command, and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.

 

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

 

See the section on special options for mke2fs before formatting the array.

 

Ok, now when you have your RAID device running, you can always stop it or re-start it using the

 

  raidstop /dev/md0

 

or

 

  raidstart /dev/md0

 

commands.

 

With mdadm you can stop the device using

 

  mdadm -S /dev/md0

 

and re-start it with

 

  mdadm -R /dev/md0

 

Instead of putting these into init-files and rebooting a zillion times to make that work, read on, and get autodetection running.

 

5.9 The Persistent Superblock

 

Back in "The Good Old Days" (TM), the raidtools would read your /etc/raidtab file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab resided was mounted. This is unfortunate if you want to boot on a RAID.

 

Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.

 

The persistent superblocks solve these problems. When an array is initialized with the persistent-superblock option in the /etc/raidtab file, a special superblock is written in the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.
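
You can inspect the persistent superblock written to any member device with mdadm, for example (illustrative device name):

   mdadm --examine /dev/sdb6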

 

You should however still maintain a consistent /etc/raidtab file, since you may need this file for later reconstruction of the array.

 

The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the Autodetection section.

 

5.10 Chunk sizes

 

The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk, actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest "atomic" mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB, will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.

 

Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.

 

For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.

 

The argument to the chunk-size option in /etc/raidtab specifies the chunk-size in kilobytes. So "4" means "4 kB".

 

RAID-0

 

Data is written "almost" in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially.

 

If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.

 

A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.

 

RAID-0 with ext2

 

The following tip was contributed by a reader:

 

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk. Example:

 

With 4k stripe size and 4k block size, each block occupies one stripe. With two disks, the stripe-#disk-product is 2*4k=8k. The default block group size is 32768 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), so you can not avoid the problem by adjusting the block group size with the -g option of mkfs(8).

 

If you add a disk, the stripe-#disk-product is 12, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

 

In case you can not add a disk, try a stripe size of 32k. The stripe-#disk-product is 64k. Since you can change the block group size in steps of 8 blocks (32k), using a block group size of 32760 solves the problem.
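
In mke2fs terms, that suggestion would look roughly like this (a hedged example: 4 kB blocks, 32760 blocks per group via -g, and stride=8 to match the 32k stripe size):

   mke2fs -b 4096 -g 32760 -R stride=8 /dev/md0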

 

Additionally, the block group boundaries should fall on stripe boundaries. That is no problem in the examples above, but it could easily happen with larger stripe sizes.

 

RAID-1

 

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.

 

RAID-4

 

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well.

 

The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.

 

RAID-5

 

On RAID-5, the chunk size has the same meaning for reads as for RAID-0. Writing on RAID-5 is a little more complicated: When a chunk is written on a RAID-5 array, the corresponding parity chunk must be updated as well. Updating a parity chunk requires either

 

  • The original chunk, the new chunk, and the old parity block
  • Or, all chunks (except for the parity chunk) in the stripe

 

The RAID code will pick the easiest way to update each parity chunk as the write progresses. Naturally, if your server has lots of memory and/or if the writes are nice and linear, updating the parity chunks will only impose the overhead of one extra write going over the bus (just like RAID-1). The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk. This will impose extra bus-overhead and latency due to extra reads.

 

A reasonable chunk-size for RAID-5 is 128 kB, but as always, you may want to experiment with this.

 

Also see the section on special options for mke2fs. This affects RAID-5 performance.

 

5.11 Options for mke2fs

 

There is a special option available when formatting RAID-4 or -5 devices with mke2fs. The -R stride=nn option will allow mke2fs to better place different ext2 specific data-structures in an intelligent way on the RAID device.

 

If the chunk-size is 32 kB, it means that 32 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 kB block-size, we realize that there will be eight filesystem blocks in one array chunk. We can pass this information on to the mke2fs utility when creating the filesystem:

 

  mke2fs -b 4096 -R stride=8 /dev/md0

 

RAID-{4,5} performance is severely influenced by this option. I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction.

 

The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.