Moving LVM volumes to a different volume group
by Sander Marechal
I recently ordered a brand new PowerEdge T105 server from Dell because my current home server, an HP ProLiant G3, is much too power hungry for my liking. It consumes about 300 Watt round the clock, increasing my power bill by nearly € 50 a month. The T105 consumes about a third of that, usually less. The new server came with an 80 GB hard disk. I partitioned it with LVM, installed Debian Lenny and moved over the bulk of my things from the old server to the new server. I did that manually over the course of two weeks because it was a good time to restructure and upgrade many other things in the process.
When I was done, only one thing remained: my media collection, which is stored on a 500 GB RAID1 array on the old server. That RAID1 array is also partitioned using LVM in a single 500 GB volume group. I took the two drives out of the old server, put them in the new server, copied over /etc/mdadm/mdadm.conf from the old server and all was well. Nearly. My media collection only uses a small part of the 500 GB volume group, so I wanted to move the OS volumes from the 80 GB volume group to the 500 GB volume group. That way I could take out the 80 GB disk and save some power. Problem: there is no obvious way to move a logical volume from one volume group to another. Additional problem: I can't run the OS from the 80 GB volume group while I am migrating it. Cue SystemRescueCD.
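For reference, here is a minimal sketch of one way such a migration can be done from a rescue environment. The volume group and device names below (vg80, vg500, root, /dev/sda2) are illustrative placeholders, not the actual layout on my server:

    # Booted from SystemRescueCD so none of the volumes being copied are in use.
    # Create an identically sized volume in the target group, copy it block
    # for block, then remove the original once the copy has been verified.
    lvcreate -L 10G -n root vg500
    dd if=/dev/vg80/root of=/dev/vg500/root bs=4M
    lvremove /dev/vg80/root
    # Repeat for each logical volume, update /etc/fstab and the bootloader,
    # then the old volume group and disk can be retired:
    vgremove vg80
    pvremove /dev/sda2

An alternative is to merge the two volume groups with vgmerge and shuffle the extents off the old disk with pvmove, but that only works when both groups use the same extent size.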
Benchmarking Linux filesystems on software RAID 1
by Sander Marechal
A couple of months ago I got a couple of wonderful birthday presents. My lovely geeky girlfriend got me two Western Digital 500 GB SATA 3.0 drives, which were promptly supplemented with a 3ware 9550XS 4-port hardware RAID card. Immediately I came up with the idea for this article. I had just read up on mdadm software RAID (updated reference), so I thought it would be perfect to benchmark the hardware RAID against the software RAID using all kinds of file systems, block sizes, chunk sizes, LVM settings, etcetera.
Or so I thought… As it turns out, my (then) limited understanding of RAID and some trouble with my 3ware RAID card meant that I had to scale back my benchmark quite a bit. I only have two disks, so I was going to test RAID 1. Chunk size is not a factor when using RAID 1, so that axis was dropped from my benchmark. Then I found out that LVM (and the size of the extents it uses) is also not a factor, so I dropped another axis. And to top it off, I discovered some nasty problems with 3ware 9550 RAID cards under Linux that quickly made me give up on hardware RAID. I still ended up testing various filesystems using different block sizes and workloads on an mdadm RAID 1 setup, so the results should still prove interesting.
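As a rough illustration of what one benchmark run on such a setup looks like; the device names, filesystem, block size and benchmark tool here are assumptions for the sketch, not the exact parameters from the article:

    # Assemble a two-disk mdadm RAID 1 array and format it for a single run.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mkfs.ext3 -b 4096 /dev/md0      # swap the filesystem and block size per run
    mount /dev/md0 /mnt/bench
    bonnie++ -d /mnt/bench -u nobody

Reformatting the same array for every filesystem and block size combination keeps the underlying RAID 1 setup constant, so only the filesystem variables change between runs.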