My EC2 instances are set up to have only the operating system and program files on the root volume, with all other data (logs, mail, etc.) on a second EBS volume. This leads to a very stable root volume, which sees a minimum of changes. Fully configured, my root volume (using Amazon’s Linux) is 1.2GB; the default size of the root volume is 8GB. Given the above, it serves little purpose for me to have so much space allocated to my root volume, and unused. I opted to shrink my root volume to 4GB, and may in future reduce this even more.
Before proceeding, it is worth noting that Amazon’s Linux uses ext4 as its root filesystem. Ext2 and ext3 root file systems can be resized in the same way; other file systems, however, require a different procedure.
- Snapshot root volume
This step is done either as a backup or to create a temporary EBS volume containing the data we will copy to the new, smaller, volume.
- Create a new (empty) EBS volume of the target size
This will become our new root volume – so, in my case, I created a 4GB EBS volume (it should be in the same availability zone as the instance you want to attach it to).
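For reference, the snapshot and volume-creation steps above can also be done with the EC2 API tools (the same tools used later in this article) – a minimal sketch, with a placeholder volume ID and an assumed availability zone:
ec2-create-snapshot vol-xxxxxxxx          # snapshot the original root volume
ec2-create-volume -s 4 -z us-east-1a      # create the new, smaller (4GB) volume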
- Prepare your original root volume
Either:
- Stop (not terminate) the instance it is attached to, and detach the volume OR
- Create a new EBS volume using the snapshot created earlier
- Attach the volumes from the previous 2 steps to an instance
While you can attach them to the original instance, these volumes should not be mounted (only attached)
In the examples below, /dev/xvda1 refers to the original root volume, and /dev/xvdg refers to the new volume.
- Run a file system check on the original volume (or volume derived from snapshot)
e2fsck -f /dev/xvda1
- Copy the data to the new volume
- Option 1: Use rsync
Format the new volume:
mkfs.ext4 /dev/xvdg
Mount the two volumes, and use rsync (a full sequence is sketched after this list):
rsync -aHAXxSP /source /target
- The ‘a’ option (archive) is recursive, copies symlinks, preserves permissions, times, groups, owners, device files, and special files
- the ‘H’ option copies hardlinks
- the ‘A’ option copies ACLs
- the ‘X’ option copies extended attributes
- the ‘x’ option doesn’t cross file system boundaries
- the ‘S’ option handles sparse files efficiently
- the ‘P’ option displays progress
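Putting it together, the rsync path might look like the following – a minimal sketch, assuming both volumes are attached (not mounted) and that /source and /target are free to use as mount points:
mkfs.ext4 /dev/xvdg                   # format the new volume
mkdir -p /source /target              # create mount points (names assumed)
mount -o ro /dev/xvda1 /source        # mount the original volume read-only
mount /dev/xvdg /target               # mount the new volume
rsync -aHAXxSP /source/ /target       # trailing slash on the source – see the note further down
umount /source /target                # unmount before detaching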
- Option 2: Use dd
- Resize the file system of the original volume to its minimum size
Since this is an ext4 file system, we use resize2fs:
- the ‘M’ option shrinks to the minimum size
- the ‘p’ option displays progress
resize2fs -M -p /dev/xvda1
The above command will output the new file system size. For instance:
"Resizing the filesystem on /dev/xvda1 to 319011 (4k) blocks.
- Calculate the number of chunks
The filesystem sits at the start of the partition, and is contiguous – the size corresponds to the output of resize2fs from above. We want to copy everything from the start to that point. Since EBS usage charges for I/O, we want to use a somewhat large chunk size – I used 16MB.
blocks*4/(chunk_size_in_mb*1024)
– round up a bit for safety (I ended up with 78 chunks, which I rounded to 80); a worked calculation is sketched after the dd step below
- Perform the actual copy of data
dd bs=16M if=/dev/xvda1 of=/dev/xvdg count=80
Note: dd uses ‘M’ as 1048576B and ‘MB’ as 1000000B
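For reference, the chunk calculation can be scripted – a minimal sketch, assuming the 319011 blocks (4k each) reported by resize2fs above:
BLOCKS=319011       # block count from the resize2fs output
CHUNK_MB=16         # chunk size in MB
# ceiling division: number of 16MB chunks needed to cover the filesystem (78 here)
COUNT=$(( (BLOCKS * 4 + CHUNK_MB * 1024 - 1) / (CHUNK_MB * 1024) ))
dd bs=${CHUNK_MB}M if=/dev/xvda1 of=/dev/xvdg count=$(( COUNT + 2 ))   # +2 rounds up for safety (80)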
- Resize the file system on the new volume to its maximum size
resize2fs -p /dev/xvdg
- Check the new file system for consistency
e2fsck -f /dev/xvdg
- Now that the data has been copied over and everything checked, we can replace our root volume on the target instance.
If the target instance is running, stop (not terminate) it
If you haven’t already, detach the root volume from the target instance.
Attach the new EBS volume to the target instance as /dev/sda1
You can determine the root device by running:
ec2-describe-instance-attribute `curl http://169.254.169.254/latest/meta-data/instance-id -s` --root-device-name
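The attach can also be done with the EC2 API tools – a minimal sketch, with placeholder IDs:
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sda1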
Start your instance and run df -h to verify the size of your root volume.
The rsync command should read (the slash after the source prevents an extraneous dir on the target):
rsync -aHAXxSP /source/ /target
Yeah… doesn’t work for more recent Ubuntu. Just detaching the old and attaching the new root as per the above fails. From the system log, printed through the management console:
Sorry, I should clarify — I had troubles using the rsync method. Apparently others have also. There’s a nice discussion in the comments here for future internet stumblers: http://alestic.com/2010/02/ec2-resize-running-ebs-root
Thanks for the feedback. I don’t work much with Ubuntu, so it’s always good to hear how things work on it (and I am a fan of Alestic too, so I’ll certainly check it out).
Thank you. Worked well on RHEL 5.4. Only the -M option is not supported in that version of resize2fs. I simply had to give it a minimum size at the end, e.g. “resize2fs -p /dev/xvda1 8G”.
Good to know you found it useful – it isn’t uncommon for the tools on various systems to differ in the parameters they accept, but it can definitely make things annoying at times.
This worked perfectly. I shrank the EBS volume of my Ubuntu 12.04 x64 instance from 8GB to 4GB successfully. I used the “dd” option instead of rsync. Thank you.
Great to hear it worked for you. Thanks for reading and commenting.
Hi Cyberx86,
I tried to use both the methods – rsync and dd.
Rsync:
After running the command, I stopped the instance. Then I detached my original root volume and attached the new volume. After this I restarted the instance. When I tried to ssh into the system using PuTTY, it said authentication failure.
dd method:
When I used
resize2fs -M -p /dev/xvda1
it said “Online shrinking is not possible”. Can you please tell me what I am doing wrong?
Thanks
After you detach the root volume, you have to attach it to a new instance as a non-root volume. You perform the resize on this new instance. (The device should not be /dev/xvda1 – that is likely the root volume; it should be something like /dev/xvdf). As the error you got suggests, you cannot do a resize on a volume that is in use – which is why it is attached to another instance as a secondary volume (e.g. the other instance is running off its own root volume – different from the one you are resizing). Hope that helps, good luck.
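In other words, something along these lines with the EC2 API tools – a minimal sketch, with placeholder IDs:
ec2-detach-volume vol-xxxxxxxx                             # detach from the stopped original instance
ec2-attach-volume vol-xxxxxxxx -i i-yyyyyyyy -d /dev/sdf   # attach to a helper instance as a secondary volume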
Thanks cyberx86,
I finally got it. Thanks a lot. You are awesome.
Hi Cyberx86,
Thanks for helping me with shrinking my EBS root volume.
I was just curious if you ever assigned two EIPs to an Ubuntu instance in EC2. I tried doing that, but after assigning 2 EIPs I was only able to access the instance using one EIP; with the other I was getting a connection timeout. The reason I need two EIPs on a single instance is that I want to run two websites from the same instance.
I followed the following procedure for associating two EIPs with a single instance:
I created a VPC, associated two ENIs with the instance, and associated two EIPs with the ENIs.
Are there some additional steps which I need to perform to get both EIPs working?
Thanks
Lavesh
Two websites usually don’t need 2 IPs (the exception being some SSL setups) – you can set up virtual hosts and have many websites running under the same IP.
You can’t attach multiple EIPs to a non-VPC instance. In VPC, you only need one ENI – but you add a secondary private IP address on that ENI. When you associate the EIP with the instance, you choose which private IP it will be mapped to. Finally, you need to modify
/etc/network/interfaces
to include the new addresses. AWS provides a good overview of the procedure in their documentation. If you have trouble getting it working, I would recommend asking on ServerFault.
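A minimal sketch of what the additions might look like (the address below is hypothetical – use the secondary private IP you assigned):
# /etc/network/interfaces – alias interface for the secondary private IP
auto eth0:1
iface eth0:1 inet static
    address 10.0.0.10      # hypothetical secondary private IP
    netmask 255.255.255.0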
Thanks for the reply.
Hi Cyberx86,
I was successfully able to set up software RAID for EBS drives.
I wanted to know if it is possible to make the EBS root volume part of a software RAID 1 array using mdadm.
I was not able to find any solutions on the net, so I wanted to know if it’s possible.
Thanks,
Lavesh
I believe it is possible, but I would advise against it as it has many complexities that are unnecessary (the problem comes from grub and the kernel not being able to read the array prior to it being initialized). Instead, I would suggest creating a separate RAID array, and binding mount points to the relevant locations. Essentially, don’t store anything other than the operating system and core packages on your root volume – databases, code, uploads, logs, etc. can all go on your RAID array (and when you use mount with bind, you are able to make the RAID appear transparent to the system – e.g. you can bind /var/log to /mnt/raid/logs). If you care about the contents of your root volume (which really, you shouldn’t), then take snapshots of it.
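A minimal sketch of the bind-mount approach, assuming the array is already mounted at /mnt/raid:
mkdir -p /mnt/raid/logs
mount --bind /mnt/raid/logs /var/log      # /var/log now transparently lives on the array
# to make it persistent, add to /etc/fstab:
# /mnt/raid/logs  /var/log  none  bind  0  0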
Essentially I was doing the same thing you mentioned. I created a separate RAID array and attached it to the OS. This RAID array stores all my code, db, logs – everything.
The reason I asked if it is possible to RAID the root is: what if the root volume goes down? What can I do in that case?
Since your files are all on a separate volume, it is just a matter of launching a new instance. You can do one of two things – make an image and launch an instance from that (not ideal – since the image will get outdated quite quickly) or take snapshots and create a new EBS volume from the snapshots. I would suggest daily snapshots of something like a root volume, and hourly snapshots of important data (remember, they are differential, so you only store the difference). If you are running only one instance, you will have a bit of downtime – but it should only take about 5 minutes to launch a new instance referencing your latest snapshot as the root volume. Frankly, if the downtime is not acceptable, then you need to look into setting up a load-balanced, stateless, high-availability cluster – where any single node can go down, and the other nodes just pick up the slack until new nodes are brought online.
Can the newly created instance pick up the existing RAID array? Also, if I am creating a daily snapshot, can the new instance which gets created continue using the existing db?
I was just curious – if my root volume only has OS-related stuff, then why do I need to take a daily snapshot, as nothing is changing over there?
The new instance can pick up the RAID array (since the root volume stores the configuration information for the array) – you just need to ensure you mount the relevant EBS volumes with the same device names. As for the daily snapshot, you are right that it is not needed – however, things do change: you tweak your configurations, update packages, install new packages. If nothing has changed, your snapshot will take up no space; if something has changed, only the difference will be stored in the snapshot. Since the snapshots are differential, it is worthwhile to take a snapshot frequently, regardless of whether or not things change.
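As an illustration, a daily snapshot could be scheduled with cron – a minimal sketch using the EC2 API tools, with a placeholder volume ID:
# /etc/cron.d/ebs-snapshot – daily snapshot at 03:00
0 3 * * * root ec2-create-snapshot vol-xxxxxxxx -d "daily root volume snapshot"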
Hi Cyberx86,
I shrank the EBS volume of my Ubuntu 14.04 x64 instance from 300GB to 5GB successfully. I used “dd” because rsync didn’t work for me. It’s a life saver post.
Thanks man,
Shubham
Thanks for reading. Glad it worked for you.
I have also tried various steps using Google, here and there, but while my new instance boots, I am unable to connect to it.
Here’s my question: http://stackoverflow.com/questions/26622794/unable-to-boot-after-shrinking-an-amazon-ebs-volume
Let me know if you can help.
a) I would suggest your question will get more attention on ServerFault instead of StackOverflow, as it is not a programming question.
b) Instead of creating an AMI, I would suggest just starting an instance, stopping it, detaching the root volume, and attaching your new root volume to the same location (/dev/sda1) [Although, realistically, this isn’t going to make a difference, it is an easier process if you need to do it multiple times]
c) Launch the AWS console and view the console log for the instance. It will hopefully give you some pointers on what is not working so that you can narrow your issue down.
I would add one more step at the end, at least for RHEL instances. The GRUB config is looking for a label to mount. This technique ultimately worked, but I had to modify the label first.
That’s an underscore then a forward slash.
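Based on that description, the relabel step would presumably look something like this (device name assumed; the label is the underscore-slash described above):
e2label /dev/xvdg _/      # set the filesystem label that GRUB/fstab expects on RHEL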
I have applied all your steps.
I used the dd command – 13GB of data copied.
All worked well, but after restart:
EXT3-fs: sda1: couldn’t mount because of unsupported optional features (240).
EXT2-fs: sda1: couldn’t mount because of unsupported optional features (240).
Kernel panic – not syncing: VFS: Unable to mount root fs on unknown-block(8,1)
It sounds like the image you recovered on might have had a more recent kernel and that you inadvertently set up an ext4 filesystem instead of ext3. If this is the case, you will need to install the appropriate packages on the root volume before it will mount. I would recommend posting a question to ServerFault if you continue to experience errors.
Hi cyberx86,
Strange – I have attached two disks: 32GB as /dev/sda1 and 18GB as /dev/sdf. If I change the order, it doesn’t work – it says status check failed.
df -h
/dev/xvdf 18G 12G 5.4G 68% /
lsblk
xvda 202:0 0 32G 0 disk
xvdf 202:80 0 18G 0 disk /
blkid
/dev/xvda: LABEL="cloudimg-rootfs" UUID="aa697099-062b-4b53-976e-f2ac573effd9" TYPE="ext4"
/dev/xvdf: LABEL="cloudimg-rootfs" UUID="aa697099-062b-4b53-976e-f2ac573effd9" TYPE="ext4"
cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults 0 0
#/dev/md0 /mnt auto defaults,nobootwait,noatime 0 2
/dev/ephemeral/mnt /mnt auto defaults,nobootwait,noatime 0 2
/mnt/swap/swapfile swap swap defaults 0 1
/dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
I already posted here:
http://serverfault.com/questions/778465/shrink-disk-aws-ec2?noredirect=1#comment982940_778465