At home, I have my operating system drive set up in RAID 0 (striped). The advantage of this, of course, is that read and write speed is appreciably increased – my benchmarks show the array to be about 1.7x as fast as either drive on its own (mind you, I am using older drives, and they are not quite matched, either). The downside of this setup is that I double the risk of a device failure leading to data loss – luckily there are no irreplaceable files on the array, and the system is imaged regularly.
Given this degree of speed increase, and the fact that Amazon permits an essentially unlimited number of EBS volumes, it seems logical to explore the possibility of setting up an EBS RAID. While this should result in a speed increase, I haven’t bothered to actually benchmark it; the benchmarks I have read do tend to indicate a speed increase, although there are some that disagree. Given the apparent reliability of EBS and the ease of backup, data loss should not be a critical concern. The one possible downside, however, is that splitting the I/O between two disks increases the total number of I/O operations (for which EBS charges).
I opted to use the XFS file system (the xfsprogs package), as it seemed best suited to my needs and permits easy resizing of the filesystem when the need arises.
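If xfsprogs is not already installed, it is available from most distributions’ repositories – for example, on a Red Hat based system:
#yum install xfsprogs
or on Debian/Ubuntu:
#apt-get install xfsprogs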
In order to set up the RAID array, you must first create two (or more; I went with just two) EBS volumes. These must then be attached to a running instance.
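For example, using the EC2 API tools – the sizes, availability zone, and instance/volume IDs below are placeholders; the volumes must be created in the same availability zone as the instance, and the device names match those used in the mdadm command below:
#ec2-create-volume -s 10 -z us-east-1a
#ec2-create-volume -s 10 -z us-east-1a
#ec2-attach-volume vol-11111111 -i i-12345678 -d /dev/sdh1
#ec2-attach-volume vol-22222222 -i i-12345678 -d /dev/sdh2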
Using mdadm, create a RAID array from the attached volumes:
#mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdh1 /dev/sdh2
Finally, format the new device:
#mkfs -t xfs /dev/md0
All that needs to be done now is to mount the device and begin using it.
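For example (the mount point, /raid0, is an arbitrary choice):
#mkdir /raid0
#mount -t xfs /dev/md0 /raid0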
Following termination of the instance, you will need to reattach the volumes. Ideally, this is done through a script that runs at startup and executes ec2-attach-volume for each volume.
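A minimal sketch of such a startup script – the volume IDs, instance ID, and device names are placeholders, and the EC2 API tools must be able to find your credentials:
#!/bin/sh
# Reattach the EBS volumes that make up the array (IDs and devices are placeholders)
ec2-attach-volume vol-11111111 -i i-12345678 -d /dev/sdh1
ec2-attach-volume vol-22222222 -i i-12345678 -d /dev/sdh2
# The reassemble and mount steps described below would follow here.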
You must then reassemble the RAID:
#mdadm --assemble --verbose /dev/md0 /dev/sdh1 /dev/sdh2
Before I could use mdadm --assemble, I found it necessary to run:
#depmod -a
#modprobe raid0
Once these steps finish, mount the device, and you are good to go.
To undo the above steps (e.g. when shutting down):
#/bin/umount $MOUNT_POINT
#mdadm -S /dev/md0
#ec2-detach-volume $VOL1
#ec2-detach-volume $VOL2
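The $MOUNT_POINT, $VOL1, and $VOL2 variables above are placeholders; a simple shutdown script might define them explicitly, e.g.:
#!/bin/sh
# Unmount the array, stop it, and detach the underlying EBS volumes
# (the mount point and volume IDs are placeholders)
MOUNT_POINT=/raid0
VOL1=vol-11111111
VOL2=vol-22222222

/bin/umount $MOUNT_POINT
mdadm -S /dev/md0
ec2-detach-volume $VOL1
ec2-detach-volume $VOL2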
To resize the RAID array, create a snapshot of each EBS volume, and then, using the AWS console, create a new volume from each snapshot, specifying the new size.
Attach the volumes to your instance, using ec2-attach-volume (e.g. as /dev/sdh3 and /dev/sdh4).
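The same steps can be performed with the EC2 API tools instead of the console – a sketch with placeholder IDs and an assumed new size of 20GB:
#ec2-create-snapshot vol-11111111
#ec2-create-snapshot vol-22222222
#ec2-create-volume --snapshot snap-aaaaaaaa -s 20 -z us-east-1a
#ec2-create-volume --snapshot snap-bbbbbbbb -s 20 -z us-east-1a
#ec2-attach-volume vol-33333333 -i i-12345678 -d /dev/sdh3
#ec2-attach-volume vol-44444444 -i i-12345678 -d /dev/sdh4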
Assemble, check, and mount the array:
#mdadm --assemble --verbose /dev/md1 /dev/sdh3 /dev/sdh4
#xfs_check /dev/md1
#mount -t xfs /dev/md1 /raid1
If an error occurs (commonly a duplicate filesystem UUID, since the new volumes are created from snapshots of the originals), generate a new UUID with:
#xfs_admin -U generate /dev/md1
If the device is reported as busy, kill the processes using it with:
#fuser -km /dev/md1
Finally, grow the filesystem to fill the larger array:
#xfs_growfs -d /raid1
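The new size should then be reflected in the output of df, e.g.:
#df -hT /raid1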
I tried the very same thing that you’ve done and I did not get any errors while mounting the RAID. However, the output of df -hT before and after the xfs_growfs remained the same 🙁
Any idea why?
I am not certain of your specific scenario; however, problems are commonly encountered at the 2TB mark or with GPT (as opposed to MBR) disks. I would suggest noting the output of the xfs_growfs command – the last line should display the result, typically ‘data blocks changed…’.
In some cases a reboot may resolve the problem.
If a restart does not resolve the issue, consider running fdisk (with the -l option) to list the partitions and associated information – it is helpful for diagnostics. (And of course, check that the correct EBS volumes are attached – it is surprisingly easy to attach the old ones if you have quite a few and are doing it manually.) Good luck with the setup.