New KVM VM Host 1

Posted by JD 06/23/2012 at 16:00

The last few weeks, we’ve been using Ubuntu 12.04 Server for internal testing as a VM host running KVM. The VMs have been a mix of 12.04, 10.04, and 8.04 systems. It has been stable with zero issues on that front. Below are the other recent changes that you may find interesting.

New Email Gateway

We created a new KVM VM for a production gateway on the new host. It has been running well. More details about this email gateway will be posted.

Migrated a Redmine Server

Redmine is a project management, tracking, and collaboration system. We’ve been running one for at least a year. It was one of the first KVM VMs here, under 10.04 for both the HostOS (Dom0) and clientOS (DomU). Migration was just a matter of shutting down the DomU clientOS on the old 10.04 host server, scp’ing the .img file to the new 12.04 server, then using virt-manager to import the old file.img into the new VM host (roughly sketched below) with similar settings for

  • RAM
  • CPU
  • Chip Architecture
  • and MAC address (there are reasons to keep the same MAC)
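
For the curious, here is roughly what that migration looks like from the command line instead of the virt-manager GUI. Treat it as a sketch, not what was actually typed: the domain name, host name, image paths, resource sizes, and MAC below are placeholders.

    # On the old 10.04 host: stop the guest cleanly, then copy its disk image.
    virsh shutdown redmine
    scp /var/lib/libvirt/images/redmine.img new-host:/var/lib/libvirt/images/

    # On the new 12.04 host: import the existing image with matching settings.
    # virt-manager's import wizard does the same job; --import skips the installer.
    virt-install --name redmine --import \
        --ram 2048 --vcpus 2 --arch x86_64 \
        --os-variant ubuntulucid \
        --disk path=/var/lib/libvirt/images/redmine.img \
        --network bridge=br0,mac=52:54:00:aa:bb:cc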

It was simple and came right up. The migration was from a Core i5 box to a Core2 X6800, but because the Core2 wasn’t being used for anything else at the time, it felt faster. That has changed in the last week.

I’m calling this a success.

Migrated this Blog Server

Since the Redmine server migration went off without any issues, I felt confident enough to migrate this blog server a few days ago. I was ready for problems, but thought I’d try a risky method. Risky?

  • While the VM was still running, I created a local copy of the blog.img blob file. This is dangerous for databases, but I figured I wouldn’t lose much and it would reduce downtime to a few seconds.
  • While the old VM was still running, I scp’d the copy over to the new hostOS, placed it into the desired directory, and set up the new VM settings (CPU, RAM, NIC with the same MAC, autostart), then clicked Finish.
  • When the VM started running, I quickly hit pause before the KVM VM booted too far.
  • I logged into the blog VM running on the old VM host and told it to shutdown -h now.
  • Back on the new VM host, I unpaused the VM. Fifteen seconds later, the blog was up and available.
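
In shell terms, that sequence looks roughly like the sketch below. The domain name, host names, and paths are made up, and copying the image while the guest is running is only reasonable because this blog changes so little.

    # On the old host: copy the disk image while the guest is still running,
    # then push the copy to the new host. Risky for anything with a busy database.
    cp /var/lib/libvirt/images/blog.img /var/tmp/blog.img
    scp /var/tmp/blog.img new-host:/var/lib/libvirt/images/blog.img

    # On the new host: after defining the guest with the same RAM/CPU/MAC,
    # start it and pause it almost immediately so it doesn't boot too far.
    virsh start blog
    virsh suspend blog

    # Cleanly stop the original guest, then let the new copy continue booting.
    ssh old-blog-vm 'sudo shutdown -h now'
    virsh resume blog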

I was shocked that this worked. It has been working flawlessly – and feels faster still.

I’m calling this another success.

Other Migrations

There are 3 other sometimes-needed VMs on the old KVM host still to be migrated. These are used for software development and as a remote desktop when I’m traveling. Migrating them is not expected to be an issue, since they are just like the already-migrated blog, PM, and gateway VMs.

Zimbra Migration

A few days ago, I decided we had been putting off a Zimbra upgrade long enough. Before any of those other upgrade plans can happen, Zimbra needs to be migrated off Xen and onto KVM. This was our first Xen-to-KVM migration. Another blog article will outline exactly what was done, but suffice it to say it really wasn’t all that hard and it didn’t take as much time as I thought it would. Best of all, there aren’t any issues – so far.

Along the way, I was able to find a HUGE Zimbra log file of 11G and remove it. This made backups drop from 30 minutes to about 4 minutes. Basically, an 11G file was copied over every night. Thankfully, rdiff-backup compresses changed files, so the backup storage never got too large.
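
Something like the commands below is a quick way to hunt for that kind of space hog. The path is the usual Zimbra install root and is only a guess for where to look; adjust as needed.

    # List any files over 1GB under the Zimbra install root.
    find /opt/zimbra -xdev -type f -size +1G -exec ls -lh {} \;

    # Or rank subdirectories by size to see where the space is going.
    du -xh /opt/zimbra | sort -rh | head -20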

What are the planned upgrades?

  1. Ubuntu 8.04 to 10.04 migration; 12.04 is not officially supported by Zimbra today, which makes perfect sense to me. The Zimbra guys are pretty awesome making a fairly complex system with thousands of fantastic features all work together pretty well. Zimbra is an MS-Exchange replacement in every way.
  2. Zimbra system update to current GA release. We aren’t too far behind, but every Zimbra update brings risks.

I’m calling this Xen-2-KVM migration another success.

More Xen VM Migrations

The next migration set includes other Xen VMs. I’m a little worried about these, but I have a Xen-to-KVM migration plan based on the technique that worked so well for Zimbra, so I’m confident it will work for the other VMs.

Once all the older Xen VMs are migrated, I’ll have a spare host to load up another Ubuntu 12.04 KVM HostOS onto and will be able to split up some of the clientOS work load.

Results

I have 7 active VMs running on the new VM host and 1 remains running on the older VM host – it is a Windows 7 system acting as a media center to record TV and probably will not be moved.

Observations

  • The new VM host is using about 5.6GB of RAM with plenty of CPU and RAM still available for a few more VMs.
  • I’ve been playing with the SPICE high performance remote desktop GUI, but haven’t gotten it working in the KVM sessions yet.
  • KVM Live-Migrations between VM hosts are on the schedule too. This is exciting.
  • The old VM host is using hardly any RAM (1.5GB), just enough for a Windows 7 Media Center (7MC) VM. Unused RAM is a waste, after all.
  • I’m really happy that virt-manager on a 12.04 desktop allows control of both KVM VM systems (a 3rd is planned); see the connection example after this list.
  • Plans for a NAS box to support live KVM migrations are still in the works, but prior attempts to use popular NAS software failed because the Via C7 CPU I have available isn’t supported, or because of an incompatible BIOS for a popular BSD distro. Time to break out a pure Debian-minimal install. I’m ready with a GigE NIC and an eSATA port; I just need an OS.
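
For anyone curious how one desktop drives multiple KVM hosts: virt-manager, and virsh, can talk to a remote libvirt daemon over SSH. The host names below are placeholders.

    # Open virt-manager against a remote KVM host over SSH; more hosts can be
    # added the same way, or via File -> Add Connection in the GUI.
    virt-manager --connect qemu+ssh://root@kvm-host1/system

    # The same URI style works for virsh when a GUI isn't handy.
    virsh -c qemu+ssh://root@kvm-host2/system list --all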

Remaining Questions

You guys are SMART and often get me thinking about different ways to handle an issue we’re having here.

  1. Have you migrated Xen to KVM? How did that go? Any tips to share?
  2. Have you upgraded to 12.04 for virtualization hosts? Any issues?
  3. Have you found a light-weight distro that can be a NAS that works on limited i586-class hardware with just 512MB of disk and 512MB of RAM? When is disk contention an issue? After 3, 4, or 5 VMs get loaded?

In the next few days, we’ll post about the email gateway and a few Ubuntu 12.04 frustrations. Sorry I haven’t been posting much the last few months, but we have been busy doing the necessary testing and gaining experience with the new stuff before we get too far.

  1. JD 06/24/2012 at 13:17

    Today I ran out of storage for the VM logical volume (LV) and needed to add some. The disks were not full – we’re running LVM2 here, so it was a simple matter of extending the LV. This works while the file system is online and being used. Impressive. As root:


    # lvdisplay
    # vgdisplay qbe
    # lvdisplay /dev/qbe/vm1
    # lvresize -L +30G /dev/qbe/vm1
    # lvdisplay /dev/qbe/vm1
    # resize2fs -p /dev/mapper/qbe-vm1
    # df

    • lvdisplay was needed to discover which volume group, VG, controlled the mounted file system.
    • vgdisplay was needed to verify that there was more free storage available in the VG. The parameter is the VG name.
    • lvresize actually adds storage to the LV (Logical Volume) while it is online. We added 30G of storage. No striping or other fancy things here.
    • lvdisplay verifies that the storage was added; the file system still needs to be grown to use it.
    • resize2fs extends the file system, ext4 in this case, to use the added storage.

    That’s it. 30G more storage added to an active, running file system.

    I should point out that if you do span HDDs for your LVs, you risk losing access to ALL the data should one of those HDDs fail. Not just the data on the failed HDD, but all the data in the LV, so be careful with your layouts. If you span drives, be certain you have excellent backups, use RAID1 AND be certain you understand the risks.
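
    If you’re unsure whether an LV already spans more than one drive, LVM will tell you. A quick check, run as root, against the same qbe VG as above:

    # List which physical devices back each logical volume in the qbe VG.
    lvs -o lv_name,vg_name,lv_size,devices qbe

    # Show how each physical volume's extents are allocated.
    pvdisplay -m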