KVM Virtualization on Ubuntu 9.10 Server

Posted by JD 03/10/2010 at 15:43

The last few days, I’ve been playing with Ubuntu Server 9.10. It hasn’t been perfect. There have been problems along the way. So everyone else knows the issues, I’ll list a few here with a little detail.

It all started during the Server 9.10 x64 installation.

Be certain to check out the comments for solutions to issues as I discover them.

Eucalyptus Cloud – Epic Fail

  • I’d planned to install the Eucalyptus Cloud, but it wouldn’t get through the installation. I tried multiple times and even changed the boot loader and file system (I’ve seen problems with those previously on other installs). In the end, I gave up and installed a minimal 9.10 x64 server with just ssh to start.

KVM Text, then GUI

  • Next, I was after KVM, a virtualization method. I created a new network bridge for the VMs to leverage; no NAT inside the host/VMs and no slow user-mode networking. I used virsh, since I didn’t want to install a GUI. I was able to get a KVM VM running, but was never able to connect to it consistently via VNC. After a few host and VM reboots, the networking did come up on an internal IP so I could SSH into the VM. I installed Apache2 and a few other server-type packages. The VM felt sluggish compared to both my Xen and VirtualBox experiences, but that was to be expected since I hadn’t attempted to optimize anything.
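For reference, this is roughly what the bridged setup looks like in /etc/network/interfaces on 9.10 with the bridge-utils package installed; the addresses and interface names here are examples, not my actual values:

```
# /etc/network/interfaces -- br0 takes over from eth0 as the primary interface
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    address 192.168.1.10       # an address on your LAN
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0          # enslave the physical NIC to the bridge
    bridge_stp off
    bridge_fd 0
```

Restart networking (or reboot) after the change, and the VMs get bridged straight onto the LAN.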

Here’s the VM creation script that I used. Sorry about the cross-out font; it’s an artifact of the publishing system. Copy and paste will show it correctly. Some settings were ’x’ed out.

vmbuilder kvm ubuntu --suite=karmic --flavour=virtual --arch=amd64 \
--mirror=http://192.168.x.x:9999/ubuntu -o --libvirt=qemu:///system --tmpfs=- \
--ip=192.168.x.xx --part=vmbuilder.partition --templates=mytemplates --cpus=2 \
--user=xxxx --name=xxxx --pass=yyyyyyyyyyyyy --addpkg=vim-nox --addpkg=unattended-upgrades \
--addpkg=acpid --firstboot=boot.sh --mem=768 --hostname=vm1 --bridge=br0

That command creates a qcow2 disk file. qcow2 files are less than optimal since they are copy-on-write and compressed. My storage isn’t fast enough to make that method perform well at the moment. Maybe in the future.
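The tradeoff is easy to see with qemu-img; this is just a sketch using throwaway file names:

```shell
# Create a small qcow2 image, then convert it to a flat raw image for comparison.
qemu-img create -f qcow2 test.qcow2 1G
qemu-img convert test.qcow2 -O raw test.raw
qemu-img info test.qcow2   # reports file format: qcow2
qemu-img info test.raw     # reports file format: raw
```

The raw file occupies its full virtual size on disk, but reads and writes skip the copy-on-write and compression layers entirely.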

  • Finally, after multi-minute waits for a simple `ls` command to complete, I decided to try the other KVM setup method, the virt-manager GUI. GUI means I needed to load X/Windows. Boo. I don’t want a GUI on a server wasting resources. My Xen servers don’t have a GUI and run really, really nicely, though they are paravirtual VMs.
    I worked through the new VM creation wizard and it created an .img file under /var/lib/libvirt/. Further, the VM seems to run faster, but it appears to be a full VM image file, like I use on Xen, not sparse or compressed and expanding as needed. It still slows down occasionally, but not so much that it couldn’t work as a server.

VNC Client – No Geometry Control or Remote Access

  • After loading and starting the VM under virt-manager, I opened it. It has a limited VNC client built in. I’d like to change the default VNC geometry; 800×600 kinda sucks. I could use a hint, please. Further, remote connections aren’t possible without setting up an ssh tunnel first, because the default VNC listener is on 127.0.1.1, a loopback address on the KVM host. I guess being paranoid about security is good, but nobody places a new server directly on the internet immediately after an install. I don’t. See, the VNC server runs on the host, not the client VM. I suppose changing the VNC server to listen for connections on the internal network is a small change in the vm.xml file.
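If I’m right, the relevant bit is the <graphics> element in the VM’s libvirt XML; something along these lines (exact attributes vary by libvirt version) should make the built-in VNC server listen beyond loopback, after a virsh define and a VM restart:

```
<!-- inside /etc/libvirt/qemu/vm1.xml, within <devices> -->
<graphics type='vnc' port='5900' autoport='no' listen='0.0.0.0'/>
```

Listening on 0.0.0.0 opens the display to the whole network, so a LAN-only address is the safer choice where the host has one.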

Alfresco Installation

I’ve been running Alfresco 2.9b under Xen for about 18 months for a corporate client.

  • Just for fun, I installed the Ubuntu-packaged version of Alfresco 3.2. It dealt with the dependencies perfectly, but it didn’t set the necessary JAVA_HOME environment variable in /etc/environment, ~root/.bashrc, or ~userid/.bashrc. Starting the Tomcat server didn’t fail outright, but the usual /opt/alfresco location isn’t used either. Neither http://localhost:8080/alfresco nor http://localhost:8080/share would connect the way normal Alfresco installs do.
  • A day later, with JAVA_HOME set (I’m a little slow) and the VM rebooted, Alfresco appears to work. The administrative console and all other parts were working until I tried to add users to the internal DB using the web GUI. Then it failed. Alfresco was never fast under Xen, but it is even slower under this KVM instance on much quicker hardware. I’d hoped the Alfresco GUI would include a way to connect to LDAP authentication. It doesn’t. You’ll have to make that connection in Java property files using an editor, just like with the older installation.
  • At some point, I’ll have to upgrade the company version to 3.2.x or 3.3. Until KVM performance improves, I simply don’t know what to do.

Manual Start of VMs

The default location where the GUI places the KVM image file is non-optimal; there is probably a template file that will control that placement. I want to start the VMs manually with virsh or a simple kvm command after I relocate the .img file(s) to the external array. I only gave the server 20GB of storage on /, so leaving the VM files there isn’t an option. Initial attempts to start VMs manually with qemu-x86_64 and kvm have not worked. They appear to start, but never actually boot. I’m certain I’ll figure that out.
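For reference, the sort of bare kvm invocation I’ve been attempting looks like this; the paths, MAC address, and tap device name are placeholders, and the tap device must already exist and be attached to br0:

```shell
# Boot the relocated raw image directly with kvm, bypassing libvirt entirely.
sudo kvm -m 768 -smp 2 \
    -drive file=/mnt/array/vm1.img,if=ide,boot=on \
    -net nic,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no \
    -vnc :1 &
```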

VNC Resolution

I really want to fix the VNC resolution. According to the KVM/QEMU man pages, passing -g 1280x1024x15 along with -vnc at startup should change the geometry. It doesn’t work; an error is returned for the -g option. I ended up installing a VNC server inside the VM, setting the geometry to my liking, and bypassing the Dom0/host VNC server completely. You know, it would be really great if libvirt or virt-manager had a global per-user setting that controlled the default VNC geometry.

Summary

So, KVM is working … just not too quickly. I need to research methods to improve KVM performance before there’s any hope of migrating from Xen to KVM. Heck, even the ESXi installation runs blazingly fast in comparison, and that system performs full hardware virtualization just as KVM does. Neither is anywhere near Xen performance with paravirtualization, but Xen support is going away from both Red Hat and Ubuntu, and Xen has its own problems when you deploy paravirtualized VMs.

Anyway, watch the comments for how I overcome each of the issues.

Comments

  1. JD 03/11/2010 at 20:56

    Ok, so I’ve spent the last week or so working with KVM. The slowness that I’ve experienced seems to be due to two things.

    1. qcow2 files instead of raw/img files
    2. VNC

    After converting the qcow2 files (compressed, copy-on-write) into raw files and editing the VM.xml definition to use the new file(s), CPU utilization dropped from 25% to under 1% (usually around 0.25%) according to Virtual Machine Manager. Each VM feels snappier, too.

    Here’s another blog with more details and useful information.

    Steps to Convert qcow2 disk files to raw/img files

    1. Stop the VM if it’s running:
      • sudo virsh shutdown vm1
    2. Convert the VM’s qcow2 (QEMU image format) disk image to a raw disk image. You can also relocate the files, if you like
      • qemu-img convert disk0.qcow2 -O raw disk0.raw
    3. Edit the VM.xml file; change the filename(s) and/or location
      • sudo vi /etc/libvirt/qemu/vm1.xml
    4. Re-define the VM:
      • sudo virsh define /etc/libvirt/qemu/vm1.xml
      • It appears that if you first undefine a VM, the XML file is removed. I don’t think you want to do that unless you’ve made a copy first.
    5. Restart the VM.
      • sudo virsh start vm1

    Be happy with better performance.

    I’ve done this file conversion with both a swap file and a file holding / inside.
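    Scripted, the steps above look roughly like this. The VM name and disk paths are examples, and the sed expression assumes the XML references the disk with a .qcow2 suffix; depending on the XML, a driver type of qcow2 may also need to become raw:

```shell
#!/bin/sh
# Convert a libvirt-managed VM's qcow2 disk to raw and update its definition.
VM=vm1
XML=/etc/libvirt/qemu/$VM.xml
DISK=/var/lib/libvirt/images/disk0

sudo virsh shutdown $VM
sudo qemu-img convert $DISK.qcow2 -O raw $DISK.raw
sudo cp $XML $XML.bak                         # keep a backup before editing
sudo sed -i 's/disk0\.qcow2/disk0.raw/' $XML  # point the definition at the raw file
sudo virsh define $XML
sudo virsh start $VM
```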

    Remote Connections

    I haven’t found how to change the geometry for VNC sessions managed by libvirt, but I did find how to remotely connect to them with help from the libvirt wiki. Obviously, we can set up a static IP inside the VM and run

     vncserver :70 -geometry 1280x1024 -depth 16 -name vm1

    inside the VM to achieve the desired results. But I’d rather have the host OS, which is running some kind of VNC server anyway, control the geometry settings properly.
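    The ssh tunnel approach from the client side amounts to something like this; the host name, user, and port numbers are examples, and the tunnel target should be whatever loopback address the listener actually binds:

```shell
# Forward a local port to the VNC display that libvirt runs on the KVM host,
# then point a VNC client at the local end of the tunnel.
ssh -N -L 5900:127.0.1.1:5900 user@kvmhost &
vncviewer localhost:5900
```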

    The KVM man page claims that passing -g 1280x1024x15 on a non-virsh/libvirt kvm startup should start the VM with a VNC display of that geometry. It has never worked for me. I have started VMs with a plain kvm script, bypassing libvirt, and they work fine.

  2. JD 03/23/2010 at 09:05

    A blog entry from someone who got Eucalyptus Cloud installed and working.

    Wish I would have found his blog earlier.

  3. JD 08/29/2010 at 09:47

    So, I haven’t tried KVM on a 10.04 server because the machine wasn’t available – I needed it for other tasks. That has all changed and I find myself with 2 machines that can be used for testing. Based on some discussions with some friends, it seems that KVM performance must have greatly improved since my 9.10 attempts.

    Before I head off to try KVM again, I want to perform some trials and capture some relevant performance statistics with other virtualization technologies on the same hardware. Comparing apples to oranges is never good. The 2 systems that are available are fairly different in capabilities.

    System A: formerly a Win7 Media Center

    • C2D E6600
    • 975x MB chipset (slow)
    • 2GB DDR2 RAM
    • 1TB SATA – not RAID

    System B: Main desktop – but I can swap the boot HDD for testing

    • Core i5
    • 8GB DDR3 RAM
    • 500GB SATA – not RAID
    • 1TB SW RAID5 SATA array

    For logistic reasons, the C2D will be much easier to play on. That machine has been powered down for over a week.

  4. JD 11/13/2010 at 07:39

    I’ve been running a few test VMs under KVM on an Ubuntu 10.04 Server. The VMs have been solid for over 2 weeks now and performance is on-par with ESXi 4.x. It is time to migrate!

    So I want to migrate the paravirtual Xen VMs over. There are instructions for how to do this on RHEL – which I don’t use. Since all my Xen VMs are Ubuntu 8.04 and paravirtual, a script conversion should be possible. All the VMs are also “full” files, not sparse or LVM-based.

    This blog will be migrated first as it is the least important VM running.

  5. JD 02/06/2011 at 07:29

    Last week, I migrated 3 production ESXi VMs (Ubuntu Server x32 and x64 VMs) over to KVM running on an Ubuntu Server x64 10.04 LTS host OS. Migrated means: shut down the VMs, backed up their disks into vm-flat.vmdk files, moved those to the KVM machine, and started them up. I did not change the VM container; they are still ESXi 4.x VMDK files.

    Those VMs have all been working well using a manual KVM + OpenVPN solution. OpenVPN is used to get the tap network devices created so traffic can get to the VMs. It is less than ideal, since there doesn’t appear to be a way to pause/suspend the VMs to perform daily backups. I am not using libvirt or virsh. Now that I’ve seen how well KVM works for these VMs, I intend to migrate them into a Proxmox install once a physical server becomes free in a few weeks. Proxmox likes to “own” the entire physical machine.
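    The tap plumbing amounts to something like this sketch; the device and bridge names are examples, and it assumes bridge-utils is installed and br0 already exists:

```shell
# Create a persistent tap device with openvpn, bring it up, and attach it
# to the bridge so a kvm guest started with -net tap,ifname=tap0 can reach
# the LAN.
sudo openvpn --mktun --dev tap0
sudo ip link set tap0 up
sudo brctl addif br0 tap0
```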