One consistent pain point on Linux has always been resizing a filesystem without rebooting.
Thankfully, in recent years this has changed with filesystems like btrfs, where you can do something like the following…
(Examples taken from an AWS EC2 instance with EBS volumes attached. There is nothing special about the EBS volumes – we just created three (5, 5 and 20GB in size) and attached them to the EC2 instance as xvd{e,f,g}.)
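A quick lsblk on the instance is an easy way to confirm the devices have appeared with the expected names and sizes before starting:

# lsblk -o NAME,SIZE,TYPE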
/dev/xvde – 5GB – creating our initial btrfs filesystem.
Create a btrfs filesystem….
# mkfs.btrfs -L TestFS /dev/xvde

WARNING! - Btrfs v3.14.1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

Performing full device TRIM (5.00GiB) ...
Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
fs created label TestFS on /dev/xvde
        nodesize 16384 leafsize 16384 sectorsize 4096 size 5.00GiB
Btrfs v3.14.1
Mount it …
# mount /dev/xvde /mnt
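(If you want the mount to persist across reboots, the usual fstab approach works – a sketch using the filesystem UUID, which blkid or 'btrfs filesystem show' will report:

# blkid /dev/xvde
# echo 'UUID=8f4293b8-d9f9-4816-81e2-c14155b15cee /mnt btrfs defaults 0 0' >> /etc/fstab

Not required for anything that follows, though.)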
Analyse it …
# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 1 FS bytes used 192.00KiB
        devid    1 size 5.00GiB used 548.00MiB path /dev/xvde
Add some data to it (3.2GB) and record a checksum we can compare against later.
# dd if=/dev/zero of=/mnt/test.dd bs=1024 count=3M
# md5sum /mnt/test.dd
c698c87fb53058d493492b61f4c74189  test.dd
Check again ….
# df -h | grep mnt
/dev/xvde       5.0G  3.1G  1.5G  68% /mnt

# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 1 FS bytes used 3.00GiB
        devid    1 size 5.00GiB used 5.00GiB path /dev/xvde

# btrfs filesystem df /mnt
Data, single: total=4.47GiB, used=3.00GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=256.00MiB, used=3.64MiB
Metadata, single: total=8.00MiB, used=0.00
Everything looks like what we’d expect.
As it’s getting full, we’d better expand it.
Expanding the filesystem – adding /dev/xvdf (5GB) – our first ‘extension’
We can add /dev/xvdf on the fly, taking the total storage space to 10GB. This is comparable to RAID0 in the sense that there is no redundancy – although with the default ‘single’ data profile (the “Data, single” line in the output below) the data is allocated across the two devices in chunks rather than striped.
# btrfs device add /dev/xvdf /mnt -f
Performing full device TRIM (5.00GiB) ...
(-f is necessary, as we’ve been too lazy to partition the block device).
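As an aside, if -f is only being demanded because the device carries a stale filesystem signature, wipefs can clear that explicitly beforehand instead (destructive, so only on a device you genuinely don’t care about):

# wipefs -a /dev/xvdf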
# df -h | grep mnt
/dev/xvde        10G  3.1G  6.5G  32% /mnt

# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 2 FS bytes used 3.00GiB
        devid    1 size 5.00GiB used 5.00GiB path /dev/xvde
        devid    2 size 5.00GiB used 0.00 path /dev/xvdf
Btrfs v3.14.1

# btrfs filesystem df /mnt
Data, single: total=4.47GiB, used=3.00GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=256.00MiB, used=3.64MiB
Metadata, single: total=8.00MiB, used=0.00
So – we’ve now got lots more room.
If we fill up the disk again, we can keep adding extensions.
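Adding whole devices isn’t the only way to grow, incidentally – if the underlying block device itself gets bigger (say the EBS volume is enlarged), btrfs can be told to take up the extra space online. A sketch, assuming it’s devid 1 that grew:

# btrfs filesystem resize 1:max /mnt

(On a single-device filesystem, plain ‘btrfs filesystem resize max /mnt’ does the same job.)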
Extending again – /dev/xvdg (20GB) – adding even more storage
Adding /dev/xvdg to the pool –
# btrfs device add /dev/xvdg /mnt -f
Performing full device TRIM (20.00GiB) ...
At this point, note that as we’ve not written any new data to /mnt, the two newer devices (xvdf and xvdg) are unused –
# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 3 FS bytes used 3.00GiB
        devid    1 size 5.00GiB used 5.00GiB path /dev/xvde
        devid    2 size 5.00GiB used 0.00 path /dev/xvdf
        devid    3 size 20.00GiB used 0.00 path /dev/xvdg
If this bothers us, we can rebalance the storage using ‘btrfs balance start /mnt’ – some time later we’d see something like:
# btrfs balance start /mnt
Done, had to relocate 13 out of 13 chunks

# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 3 FS bytes used 3.00GiB
        devid    1 size 5.00GiB used 32.00MiB path /dev/xvde
        devid    2 size 5.00GiB used 256.00MiB path /dev/xvdf
        devid    3 size 20.00GiB used 4.28GiB path /dev/xvdg
Btrfs v3.14.1
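On a fuller or busier filesystem a complete balance can take a long time; balance filters let you restrict it to, say, chunks that are less than half full, which is often enough to spread allocation across the new devices:

# btrfs balance start -dusage=50 /mnt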
Adding some mirroring / changing an existing btrfs filesystem to raid1
So, perhaps we need some redundancy — we can do :
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
(-d is for data, -m is for metadata).
Obviously change to e.g. raid5 or something else if you feel more adventurous.
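For example, a conversion to raid5 would look much the same – bearing in mind that btrfs raid5/6 support was still regarded as unstable at the time, so treat this as a sketch rather than a recommendation:

# btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt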
If you get bored waiting for a balance, remember you can do :
# btrfs balance status /mnt
Balance on '/mnt' is running
5 out of about 6 chunks balanced (6 considered), 17% left
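Balances can also be paused, resumed or cancelled outright if they’re getting in the way of real work:

# btrfs balance pause /mnt
# btrfs balance resume /mnt
# btrfs balance cancel /mnt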
Now our available space drops back to around 5GB, as that’s all that can be used for a RAID1 mirror here (unfortunately btrfs doesn’t have the ‘intelligence’ to create one RAID0 volume out of xvde and xvdf and RAID1 that against xvdg, which would give an effective space of 10GB).
# btrfs filesystem df /mnt
Data, RAID1: total=4.00GiB, used=3.00GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=256.00MiB, used=3.22MiB
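Newer versions of btrfs-progs (later than the v3.14.1 used here) also have a ‘usage’ subcommand, which gives a much clearer breakdown of raw vs. usable space per profile than df does:

# btrfs filesystem usage /mnt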
Removing devices from a BTRFS volume
However, at some point we might want to consolidate our storage onto a single device rather than having it spread over several.
So, we can remove xvde and xvdf
# btrfs device delete /dev/xvde /mnt
# df | grep mnt
/dev/xvdf      26214400 6300288 3600224  64% /mnt
– note that this handily updates the mount table to show the mounted device as /dev/xvdf now.
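As an aside, ‘btrfs device delete’ also accepts the literal keyword ‘missing’, which is how you’d remove a device that has died outright from a degraded RAID1 (as opposed to one being retired deliberately) – roughly:

# mount -o degraded /dev/xvdg /mnt
# btrfs device delete missing /mnt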
If we then try to remove /dev/xvdf as well, we get an error –
# btrfs device delete /dev/xvdf /mnt
ERROR: error removing the device '/dev/xvdf' - unable to go below two devices on raid1
So we can convert it back to a non-mirrored (‘raid0’) layout, and then remove it —
# btrfs balance start -dconvert=raid0 -mconvert=raid0 -f /mnt
Done, had to relocate 6 out of 6 chunks

# btrfs device delete /dev/xvdf /mnt
(-f is necessary, as you’ll see from the dmesg output —
[ 3289.072971] BTRFS error (device xvde): balance will reduce metadata integrity, use force if you want this
– which is a reasonable enough warning… but “obviously” we know best.)
This leaves us with :
# btrfs filesystem show /mnt
Label: 'TestFS'  uuid: 8f4293b8-d9f9-4816-81e2-c14155b15cee
        Total devices 1 FS bytes used 3.00GiB
        devid    3 size 20.00GiB used 4.28GiB path /dev/xvdg

# btrfs filesystem df /mnt
Data, single: total=4.00GiB, used=3.00GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=256.00MiB, used=3.22MiB

# df -h | grep mnt
/dev/xvdg        20G  3.1G   17G  16% /mnt
And just to confirm – our md5sum still matches –
# md5sum test.dd
c698c87fb53058d493492b61f4c74189  test.dd
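If you want something more thorough than spot-checking one file’s md5sum, btrfs can verify its own checksums across the whole filesystem with a scrub:

# btrfs scrub start /mnt
# btrfs scrub status /mnt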