KVM Virtualization on Ubuntu 9.10 Server

Posted by JD 03/10/2010 at 15:43

The last few days, I’ve been playing with Ubuntu Server 9.10. It hasn’t been perfect. There have been problems along the way. So that everyone else knows about the issues, I’ll list a few here with a little detail.

It all started during the Server 9.10 x64 installation.

Be certain to check out the comments for solutions to issues as I discover them.

It's Been a Busy Week - Random Thoughts

Posted by JD 02/26/2010 at 16:19

Nothing really to report this week. Doing RMA stuff on an old Antec 550W PSU and getting an estimate to fix the Dell laptop.

xUbuntu 9.10, Adobe AIR, Random Rants

Posted by JD 02/19/2010 at 08:01

Last week, my main laptop died, taking my main xUbuntu installation with it. Ok, it didn’t really take it, since I have backups and the hard disk was fine. Further, because I run it in a VirtualBox VM, picking it up and moving it to a different physical machine was fairly simple, once I had a machine ready for VirtualBox.
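
If you ever need to do the same kind of move, one way is to export the VM as an appliance and import it on the new host. A minimal sketch – the VM name and file name are made up, and it assumes VBoxManage is on the PATH of both machines:

    #!/usr/bin/env python3
    # Hypothetical sketch: move a VirtualBox VM between physical hosts by
    # exporting it to an appliance file and importing it on the new one.
    # "xubuntu-desktop" and "xubuntu.ova" are illustrative names.
    import subprocess

    VM = "xubuntu-desktop"

    # On the old host (or from a restored backup): package the VM.
    subprocess.check_call(["VBoxManage", "export", VM, "-o", "xubuntu.ova"])

    # After copying xubuntu.ova to the new host, register it there:
    subprocess.check_call(["VBoxManage", "import", "xubuntu.ova"])

Restoring the VM’s files from backup, as I did here, works too; the appliance route just keeps everything in one portable file.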

Anyway, I’ve spent the last week building a new machine, migrating Linux servers around, rebuilding a Windows7 Media Center machine, and fighting with a bad power supply and poor connections in DVD drives and network cables. Finally, everything is starting to work as expected. I was feeling lucky, so I decided to update the main xUbuntu desktop VM from 8.04 LTS to 9.10. Yes, I said update, not a fresh install. BTW, the 8.04 install was originally an upgrade from 6.06.

xUbuntu 8.04 -> 9.10

Virtualization Survey, an Overview

Posted by JD 12/22/2009 at 20:40

Sadly, the question of which virtualization solution is best for Linux isn’t an easy one to answer. There are many different factors that go into the answer. While I cannot answer the question for you, since your needs and mine are different, I can provide a little background on what I chose and why. We won’t discuss why you should be running virtualization or which specific OSes to run. You already know why.

Key things that go into my answer

  1. I’m not new to UNIX. I’ve been using UNIX since 1992.
  2. I don’t need a GUI. Actually, I don’t want a GUI and the overhead that it demands.
  3. I would prefer to pay for support, when I need it, but not be forced to pay to do things we all need to accomplish – backups for example.
  4. My client OSes won’t be Windows. They will probably be the same OS as the hypervisor hosting them. There are some efficiencies in doing this, such as reduced virtualization overhead.
  5. I try to avoid Microsoft solutions. They often come with additional requirements that, in turn, come with more requirements. Soon, you’re running MS-ActiveDirectory, MS-Sharepoint, MS-SQL, and lots of MS-Windows Servers. With that come the MS-CALs. No thanks.
  6. We’re running servers, not desktops. Virtualization for desktops implies some other needs (sound, graphics acceleration, USB).
  7. Finally, we’ll be using Intel Core 2 Duo or better CPUs with VT-x support enabled, in machines with 8GB+ of RAM. AMD makes fine CPUs too, but during our recent upgrade cycle, Intel had the better price/performance ratio. (A quick check for this is sketched just below this list.)
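
Before counting on VT-x (or AMD-V), it’s worth a quick sanity check of the CPU flags in /proc/cpuinfo; if the flag is missing, the feature may just be disabled in the BIOS. A minimal check in Python:

    #!/usr/bin/env python3
    # Check /proc/cpuinfo for hardware virtualization flags:
    # "vmx" means Intel VT-x, "svm" means AMD-V.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("VT-x (vmx):", "vmx" in flags)
    print("AMD-V (svm):", "svm" in flags)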

Major Virtualization Choices

  1. VMware ESXi 4 (don’t bother with 3.x at this point)
  2. Sun VirtualBox
  3. KVM as provided by RedHat or Ubuntu
  4. Xen as provided by Ubuntu

I currently run all of these except KVM, so I think I can say which I prefer and which is proven.

ESXi 4.x

I run this on a test server just to gain knowledge. I’ve considered becoming VMware Certified and may still get certified, which is really odd, since I don’t believe many mainstream certifications mean much – the exceptions being CISSP, VMware, Oracle DBA, and Cisco. I dislike that VMware has disabled things that used to work in prior versions – backups at the hypervisor level, for example – to encourage full ESX deployments over the free ESXi. I’ve been using some version of VMware for about 5 years.

One negative: VMware can be picky about which hardware it will support. Always check the approved hardware list. Almost no desktop motherboard will have a supported network card, and VMware may not like the disk controller either, so spending another $30-$200 on networking will be necessary.

ESXi is rock solid. No crashes, ever. There are many very large customers running thousands of VMware ESX server hosts.

Sun VirtualBox

I run this on my laptop because it is the easiest hypervisor to use. Also, since it targets desktops, it includes USB pass-thru capabilities. That’s a good thing, except it is also the least stable hypervisor that I use. That system locks up about once a month for no apparent reason, which is unacceptable for a server under any conditions. The host OS is Windows7 x64, so that could be the source of the instability. I don’t play on this Windows7 machine; it is used almost exclusively as a platform for running VirtualBox and very little else.

Until VirtualBox gains stability, it isn’t suitable for use on servers, IMHO.

Xen (Ubuntu patches)

I run this on 2 servers, each hosting about 6 Linux client systems. During system updates, another 6 systems can be spawned as part of the backout plan or for testing new versions of software. I built the systems over the last few years using carefully selected name-brand parts. I don’t use HVM mode; because each VM runs the same kernel as the host, each gets about 97% of native hardware performance.
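
Spawning an extra client mostly boils down to copying a base disk image and dropping a new config into /etc/xen. The sketch below only illustrates that idea – the paths, kernel version, and settings are invented examples, not my actual vm_create script:

    #!/usr/bin/env python3
    # Illustrative sketch (not the real vm_create): clone a Xen PV guest by
    # copying a golden disk image and writing a minimal guest config.
    # All paths and the kernel version are made-up examples.
    import shutil
    import sys

    def vm_create(name):
        img = "/var/lib/xen/images/%s.img" % name
        shutil.copyfile("/var/lib/xen/images/base.img", img)  # clone the disk
        cfg_lines = [
            'name = "%s"' % name,
            "memory = 512",
            'kernel = "/boot/vmlinuz-2.6.27-xen"',
            'ramdisk = "/boot/initrd.img-2.6.27-xen"',
            "disk = ['file:%s,xvda,w']" % img,
            "vif = ['bridge=xenbr0']",
            'root = "/dev/xvda1 ro"',
        ]
        cfg = "/etc/xen/%s.cfg" % name
        with open(cfg, "w") as f:
            f.write("\n".join(cfg_lines) + "\n")
        return cfg

    if __name__ == "__main__":
        # Boot the new guest afterwards with: xm create <cfg>
        print(vm_create(sys.argv[1]))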

There are downsides to Xen.

  1. Whenever the Xen kernel gets updated, it is a big deal: the hypervisor has to be rebooted. In fact, I’ve had to reboot the hypervisor 3 times after a single kernel update before the update took hold in all the clients. Now I plan for that.
  2. Kernel modules have to be manually copied into each VM. That isn’t a big deal, but it does have to be done (a sketch of this chore follows this list).
  3. I don’t use a GUI; that’s my preference. If you aren’t experienced with UNIX, you’ll want to find a GUI to help create, configure, and manage the Xen infrastructure. I have a few scripts – vm_create, kernel_update, and lots of chained backup scripts – to get the work done.
  4. You’ll need to roll your own backup method. There are many, many, many, many options. If you’re having trouble determining which hypervisor to use, you don’t stand a chance of picking the best backup method. I’ve discussed backup options extensively on this blog.
  5. No USB pass-thru, that I’m aware of. Do you know differently?
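
Here’s the module-copy chore from item 2, sketched in Python. Everything here is an assumption for illustration – the guest image paths, the loopback mounts, and the mount point – since the real version depends entirely on how your guest disks are laid out:

    #!/usr/bin/env python3
    # Illustrative sketch: push the dom0 kernel's modules into each PV
    # guest's root filesystem. Image paths and mount point are assumptions.
    # The guests must be shut down before their disks are mounted.
    import os
    import shutil
    import subprocess

    KVER = os.uname().release              # dom0 and guests share this kernel
    SRC = "/lib/modules/" + KVER
    GUEST_DISKS = ["/var/lib/xen/images/vm1.img",
                   "/var/lib/xen/images/vm2.img"]
    MNT = "/mnt/guest"

    for disk in GUEST_DISKS:
        subprocess.check_call(["mount", "-o", "loop", disk, MNT])
        try:
            dst = MNT + SRC
            if os.path.isdir(dst):
                shutil.rmtree(dst)         # clear any stale module tree
            shutil.copytree(SRC, dst)
        finally:
            subprocess.check_call(["umount", MNT])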

I’ve only had 1 crash after a kernel update with Xen, and that was over 8 months ago. I can’t rule out cockpit error.

Xen is what Amazon EC2 uses, and they have millions of VMs. Now, that’s what I call scalability. That knowledge weighed heavily on my decision.

KVM

I don’t know much about KVM. I do know that both RedHat and Ubuntu are migrating to KVM as the default virtualization hypervisor in their server products, since the KVM code was added to the Linux kernel. Canonical’s 10.04 LTS release will also include an API 100% compatible with Amazon’s EC2 API, binary-compatible VM images, and VM cluster management. If I were deploying new servers today, I’d at least try 9.10 Server and these capabilities. Since we run production servers on Xen, I don’t see us migrating until KVM and the specific version of Ubuntu it requires are supported by those apps.
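
I haven’t tried KVM yet, but on both distros the usual entry point is libvirt. Here is a minimal sketch, assuming the python-libvirt bindings are installed and libvirtd is running, that connects to the local KVM/QEMU instance and lists the running guests:

    #!/usr/bin/env python3
    # Minimal libvirt sketch: connect to the local KVM/QEMU hypervisor and
    # list the running domains. Assumes python-libvirt and a running
    # libvirtd; not specific to any one distro.
    import libvirt

    conn = libvirt.open("qemu:///system")    # local KVM/QEMU URI
    print("hypervisor type:", conn.getType())
    for dom_id in conn.listDomainsID():      # IDs of running domains
        dom = conn.lookupByID(dom_id)
        state, maxmem, mem, vcpus, cputime = dom.info()
        print("%s: %d vcpus, %d KiB" % (dom.name(), vcpus, mem))
    conn.close()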

Did I miss any important concerns?

It is unlikely that your key things match mine. Let me know in the comments.