I recently ordered a brand new PowerEdge T105 server from Dell because my current home server, an HP ProLiant G3, is much too power-hungry for my liking. It consumes about 300 Watts around the clock, increasing my power bill by nearly € 50 a month. The T105 consumes about a third of that, usually less. The new server came with an 80 GB hard disk. I partitioned it with LVM, installed Debian Lenny and moved the bulk of my things over from the old server to the new server. I did that manually over the course of two weeks because it was a good time to restructure and upgrade many other things in the process.
When I was done only one thing remained: my media collection, which is stored on a 500 GB RAID1 array on the old server. That RAID1 array is also partitioned using LVM, in a single 500 GB volume group. I took the two drives out of the old server, put them in the new server, copied over /etc/mdadm/mdadm.conf from the old server and all was well. Nearly. My media collection only uses a small part of the 500 GB volume group, so I wanted to move the OS volumes from the 80 GB volume group to the 500 GB volume group. That way I could take out the 80 GB disk and save some power. Problem: there is no obvious way to move a logical volume from one volume group to another. Additional problem: I can't run the OS from the 80 GB volume group while I am migrating them. Cue SystemRescueCD.
At first I tried to use a Debian Etch Live CD and an Ubuntu 8.04 Live CD to access my mdadm RAID and LVM volume groups, but neither worked well. The 2.6.18 kernel in Debian Etch is too old to handle the T105 hardware. It could not get the on-board gigabit Ethernet working, so I could not download the packages I needed to get mdadm and LVM working. Ubuntu was just as unsuccessful. While it did see my Ethernet card, various kernel modules needed for mdadm and LVM are left out of the Live CD kernel in order to save space. So I went with SystemRescueCD, which comes with both mdadm and LVM out of the box.
The system layout is quite simple. /dev/sda1 and /dev/sdb1 make up a 500 GB mdadm RAID1 volume. This RAID volume contains an LVM volume group called “3ware”, named so because in my old server it was connected to my 3ware RAID card. It contains a single logical volume called “media”. The original 80 GB disk is on /dev/sdc1 which contains an LVM volume group called “linuxvg”. Inside that volume group are three volumes: “boot”, “root” and “swap”. Goal: Move linuxvg-root and linuxvg-boot to the 3ware volume group. Additional goal: Rename 3ware to linuxvg. The latter is more for aesthetic reasons but as a bonus it also means that there is no need to fiddle with grub or fstab settings after the move.
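In short:
- /dev/sda1 + /dev/sdb1 → /dev/md0 (500 GB RAID1) → volume group "3ware" → logical volume "media"
- /dev/sdc1 (80 GB) → volume group "linuxvg" → logical volumes "boot", "root" and "swap"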
Before booting into SystemRescueCD and moving things around, there are a few things that need to be done first. Start by making a copy of /etc/mdadm/mdadm.conf because you will need it later. Also, because the machine will be booting from the RAID array, I need to install grub on those two disks.
- # grub-install /dev/sda
- # grub-install /dev/sdb
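As for the copy of mdadm.conf: it only needs to end up somewhere that is reachable from the live CD. Here I am assuming a USB stick mounted at /mnt/usb; the destination path is just an example.
- # cp /etc/mdadm/mdadm.conf /mnt/usb/mdadm.conf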
Now it’s time to boot into SystemRescueCD. I start off by copying /etc/mdadm/mdadm.conf back and starting the RAID1 array. This command scans for all the arrays defined in mdadm.conf and tries to start them.
- # mdadm --assemble --scan
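A quick sanity check that the array actually came up (this assumes it gets assembled as /dev/md0, which is also what the lvm.conf filter below expects):
- # cat /proc/mdstat
- # mdadm --detail /dev/md0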
Next I need to make a couple of changes to /etc/lvm/lvm.conf. If I were to scan for LVM volume groups at this point, it would find the 3ware group three times: once each on /dev/md0, /dev/sda1 and /dev/sdb1. So I adjust the filter setting in lvm.conf so that it will not scan /dev/sda1 and /dev/sdb1.
- filter = [ "r|/dev/cdrom|", "r|/dev/sd[ab]1|" ]
LVM can now scan the hard drives and find all the volume groups.
- # vgscan
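To double-check, pvs and vgs should now list only /dev/md0 and /dev/sdc1 as physical volumes, with the 3ware and linuxvg volume groups on top of them:
- # pvs
- # vgs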
I disable the volume groups so that I can rename them. linuxvg becomes linuxold and 3ware becomes the new linuxvg. Then I re-enable the volume groups.
- # vgchange -a n
- # vgrename linuxvg linuxold
- # vgrename 3ware linuxvg
- # vgchange -a y
Now I can create a new logical volume in the 500 GB volume group for my boot partition and create an ext3 filesystem on it.
- # lvcreate --name boot --size 512MB linuxvg
- # mkfs.ext3 /dev/mapper/linuxvg-boot
I create mount points to mount the original boot partition and the new boot partition and then use rsync to copy all the data. Don't use a plain cp for this! Rsync with the -a (archive) option preserves symlinks, permissions, ownership and timestamps, which a plain cp does not (add -H if you also need hard links preserved). If you do not want to use rsync you could also use the dd command to transfer the data directly from block device to block device, as shown after the root copy below.
- # mkdir /mnt/src /mnt/dst
- # mount -t ext3 /dev/mapper/linuxold-boot /mnt/src
- # mount -t ext3 /dev/mapper/linuxvg-boot /mnt/dst
- # rsync -avh /mnt/src/ /mnt/dst/
- # umount /mnt/src /mnt/dst
Rinse and repeat to copy over the root filesystem.
- # lvcreate --name root --size 40960MB linuxvg
- # mkfs.ext3 /dev/mapper/linuxvg-root
- # mount -t ext3 /dev/mapper/linuxold-root /mnt/src
- # mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst
- # rsync -avh /mnt/src/ /mnt/dst/
- # umount /mnt/src /mnt/dst
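For reference, the dd alternative mentioned earlier would look something like this for the boot volume, with both filesystems unmounted and the new logical volume at least as large as the old one (the block size is just a reasonable choice):
- # dd if=/dev/mapper/linuxold-boot of=/dev/mapper/linuxvg-boot bs=4M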
There's no sense in copying the swap volume. Simply create a new one.
- # lvcreate --name swap --size 1024MB linuxvg
- # mkswap /dev/mapper/linuxvg-swap
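Because the volume group keeps the linuxvg name, /etc/fstab and the grub configuration should not need any changes, but it is worth verifying that on the freshly copied root before rebooting. A minimal check, assuming fstab references the /dev/mapper paths rather than UUIDs:
- # mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst
- # grep mapper /mnt/dst/etc/fstab
- # umount /mnt/dst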
And that's it. I rebooted into Debian Lenny to make sure that everything worked and then removed the 80 GB disk from my server. While this wasn't particularly hard, I do hope that the maintainers of LVM create an lvmove command to make this even easier.
Comments
#1 Anonymous Coward
sudo apt-get install mdadm lvm2
?
#2 David M.
First, you can use the Ubuntu live CD to do this. As the first comment indicates, you just need to use apt-get to install mdadm and lvm2. I've done this a number of times, mainly because it's much faster than downloading another ISO and burning another CD.
Secondly, I believe you can do this using LVM commands with the following steps.
Run vgmerge to merge your two volume groups into one. Then use pvmove to migrate your LVs off the 80GB drive. Once you have the LVs all on the RAID you can use vgreduce and pvremove to get the 80GB drive out of the volume group.
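Roughly, with the volume group names from the article (an untested sketch; the source group is deactivated first because vgmerge wants it inactive):
- # vgchange -a n linuxold
- # vgmerge linuxvg linuxold
- # pvmove /dev/sdc1
- # vgreduce linuxvg /dev/sdc1
- # pvremove /dev/sdc1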
#3 Sander Marechal (http://www.jejik.com)
With regards to installing mdadm and lvm on Ubuntu: I tried that. It then complained about not finding kernel modules. So I installed the kernel modules as well, ran insmod and tried again, once more without success. lvm could see the volume group on my 80 GB drive but mdadm refused to see or start the RAID1. At that point I gave up and got SystemRescueCD because I knew it had everything out of the box.
@David: Nice trick with the vgmerge and pvmove. I looked at the commands but didn't put them together like you did. Someone on #lvm recommended using rsync or dd instead so I went with that. I'll definitely try it next time such a situation comes up.
#4 furicle (http://furicle.blogspot.com)
#5 Sander Marechal (http://www.jejik.com)
#6 Thomas Harold (http://www.tgharold.com/techblog/)
One of the tricks that I did earlier this year was to move PVs around on a system where I was changing from a 160GB 7200 RPM SATA (RAID-1) to a 150GB 10k RPM SATA (RAID-1). I also used the vgmerge, pvmove, vgreduce and pvremove commands.
The other brute-force solution is to unmount the original file system, create an identically sized LV in the target VG, and simply use the dd command to copy the blocks from the old LV to the new LV. But cp or rsync and mounting both at the same time works well. Multiple ways to skin the cat and all that.
(I greatly enjoy LVM's flexibility, it's saved me lots of trouble over the past few years where I needed to change things around on a Linux box.)
#7 Sander Marechal (http://www.jejik.com)
#8 Jaco Kroon (http://jkroon.blogs.uls.co.za)
#9 Stoat
Usual warnings apply. The underlying LUNs for the PVs in my case are 15-drive 24 TB RAID6 arrays, so this is effectively RAID60, not RAID0... :-)
#10 iSO (http://pascal-schwarz.ch)
Just create a new LV on the target VG with the same size, copy using dd, test, then remove the LV on the source VG.
And about rsync/cp: doesn't "cp -arv" work as well?
"--archive same as -dR --preserve=all"
#11 Anonymous Coward
http://pleasedonttouchthescreen.blogspot.com/2011/10/migrating-logical-volumes-between.html