Ubuntu 10.04 and Xen Dom0 - NOT!
Xen as a Dom0 is not supported in Ubuntu 10.04 (Lucid Lynx) by Canonical. Both Canonical and Red Hat have decided to get behind the KVM virtualization method instead. I think this was a choice driven by the required maintenance effort, since KVM hooks have been in the baseline Linux kernel for about a year and Xen inclusion into the Linux kernel doesn’t seem likely at any point in the future. Supporting Xen kernels is just too tough.
In short, if you want to run Xen as a Dom0 on Ubuntu, you have a few choices:
- Download the Xen stuff from XenSource
- Stay on Ubuntu 8.04 LTS for a few more years until that support ends
- Perform some other Xen-kernel magic that doesn’t include the Xen or partner repositories
- Custom-build the Xen kernel on your systems.
Xen DomU support in 10.04 appears to be possible, under KVM. Don’t ask me how.
My Plans
These are simple: stay on 8.04 LTS until I’m ready to switch to KVM on 10.04 LTS sometime in the next few years. My testing of KVM has left me less than happy so far. I cannot see changing current production systems from Xen to KVM at this point. More and more, VMware ESXi 4.x appears viable when compared to KVM.
I used to build my own kernels and wasn’t afraid to have custom software and unusual installs on my systems. That’s fine when you have 1 or 2 boxes. When you have 20+, it simply isn’t efficient. I cannot consider that as an answer at this point, but I will admit that I should perform a little more research before completely writing this method off.
I’m not looking to be outside the core repositories and security patches. I’d like to avoid that completely.
I hope KVM matures and its performance improves, but Hope is not a plan. I need a plan sooner rather than later.
Hi There,
I’m in a similar situation as yourself. So far, I have come up with the following (possibly incorrect) conclusion:
DomU:
If you just want to run Ubuntu 10.04 as a DomU, then I think that’s easy, as Ubuntu has the DomU kernel built into 10.04. There are some problems with using Grub2, so you have two options. For Xen 3.x, go over to Stacklet and download the 10.04 image; they’ve just downgraded the image to use Grub1. For Xen 4.0, you’re flying: Xen 4.0 supports Grub2, so from what I’m led to believe, it works out of the box.
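For reference, a classic xm-style DomU config looks roughly like the sketch below (every name, path, and the LVM volume are placeholders, not anything from a real setup). The interesting line is the pygrub one: pygrub reads the guest’s own Grub menu, which is exactly where the Grub2 trouble comes from on Xen 3.x.

```python
# Hypothetical /etc/xen/lucid-guest.cfg -- xm-style Xen configs are plain
# Python assignments. All names, paths, and sizes here are examples only.
name       = "lucid-guest"
memory     = 512                        # MB of RAM for the guest
vcpus      = 1
bootloader = "/usr/bin/pygrub"          # parses the guest's own Grub menu;
                                        # Xen 3.x pygrub has trouble with
                                        # Grub2 menus, hence the Grub1 images
disk       = ["phy:/dev/vg0/lucid-root,xvda,w"]   # placeholder LVM volume
vif        = ["bridge=xenbr0"]
on_reboot  = "restart"
on_crash   = "restart"
```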
Dom0 and DomU:
I haven’t tried it yet, but from what I gather, it’s not too difficult to use 10.04 as both Dom0 and DomU. Just use the following link, which patches the kernel with pv_ops support:
http://bderzhavets.wordpress.com/2010/04/24/set-up-ubuntu-10-04-server-pv-domu-at-xen-4-0-dom0-pvops-2-6-32-10-kernel-dom0-on-top-of-ubuntu-10-04-server/
I haven’t tried any of the above as my server hasn’t arrived yet, but I’ll let you know my results.
Thanks for the info. Hopefully it will help someone else. Please post again or throw me an email (these articles stop accepting comments eventually). We’d like to hear your results. I’d seen that link before. Actually, it is what convinced me not to use Xen going forward.
Because I’m running multiple production servers, I’m not interested in using any kernels or virtualization (Xen) that are not supplied from an Ubuntu LTS repository or that cannot be used commercially. Personal Use Licenses don’t help me. We eat the dogfood we use with our clients. That is important.
In the lab, I switched to KVM under Ubuntu 10.04 and was not impressed. Performance sucked compared to Xen on 8.04, even though I was using a much faster disk subsystem and faster CPUs.
Since mid-last week, I removed KVM/QEMU and installed VirtualBox OSE 3.1.6 on the Ubuntu 10.04 lab machine. So far it has been stable (primary concern). Next week, I’ll run performance tests to see how it compares with the other lab systems running Xen, ESXi, and VirtualBox on an MS-Windows host. Stability and performance will be key determining factors for our transition away from Xen.
We are leaving Xen – I’ve already decided that because managing custom kernels is too much effort for our support staff.
Came across this Build Xen 4.1-unstable for Ubuntu 10.04 Server tutorial today and figured someone might find it useful.
I’m still firmly in the “if it doesn’t come with my distro, then I don’t want it” camp.
Hi John,
I’m curious about your performance tests (VirtualBox compared to Xen, ESXi).
Have you got any results to share with us yet?
Thanks!
Sadly, other things in life and work have caught up with me and need to be addressed before I can get back to performance concerns. Those pesky clients asking for things get in the way. OTOH, a paycheck is good and must take priority. Setting up and performing reasonable testing is more complex when you don’t have unlimited hardware or unlimited hours to spend.
Right now, we have in-house production running under 3 different virtualization tools – ESXi, VirtualBox, and Xen. We have clients using ESX and all the Sun/IBM/HP UNIX tools. I need to standardize on a single x86/x64 virtualization platform to free up some machines.
Just to reiterate – VirtualBox has not been stable enough for production use. It completely locks up a system about every 4-7 days, requiring a reboot. We’re still on OSE 3.1.6 from the Ubuntu repository. Newer versions like v3.2.x seem to be a step backwards in stability, according to reports that I’ve read.
Perhaps we can brainstorm a few key items for performance testing? I’m not looking for anything too complex. It needs to be as self-contained as possible and as easy to reproduce as possible, by me and others, regardless of the VM host. Perhaps the http://www.phoronix.com test suite will suffice? At this point, any data is better than guessing.
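To make it concrete, the sort of self-contained micro-test I have in mind would look roughly like the sketch below (not a substitute for the Phoronix suite; the loop count, file name, and write size are arbitrary picks). The idea is to run the same script inside a guest on each VM host and compare the numbers.

```python
#!/usr/bin/env python
# Rough micro-benchmark sketch: time a CPU-bound loop and a sequential
# disk write so the same numbers can be collected inside any guest,
# regardless of the VM host underneath. All sizes are arbitrary.
import os
import time

def cpu_test(iterations=5000000):
    """Time a simple integer-arithmetic loop."""
    start = time.time()
    total = 0
    i = 0
    while i < iterations:
        total += i * i
        i += 1
    return time.time() - start

def disk_test(path="bench.tmp", size_mb=256):
    """Time writing size_mb of zeros and syncing them to disk."""
    block = b"\0" * (1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return elapsed

if __name__ == "__main__":
    print("CPU loop:   %.2f s" % cpu_test())
    print("Disk write: %.2f s for 256 MB" % disk_test())
```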
Thanks for the quick answer! Work catching up with you sounds familiar.
I have only just started to explore virtualisation. I run VirtualBox on the Ubuntu 10.04 desktop for some testing and development purposes. (I also ran two versions of Virtual PC in the past.) My experiences with VirtualBox are positive. I don’t run VB for several days in a row, so I can’t tell you about lockups.
(Have you tried running your current version of VB using VBoxHeadless? Maybe it doesn’t lock up this way).
Besides VBox on the desktop, I’m interested in using a virtualisation product to reduce the number of physical servers in the future. I’m thinking along the lines of a dedicated Ubuntu server running a virtualisation solution.
I found the Phoronix test report earlier today. Shows some interesting information indeed.
I agree with you on sticking to the Ubuntu LTS repository for the kernel. I think I’ll give KVM a try and see what it does for me, since, for the moment, my only intent is to run smaller server guest instances on the virtual host.
Greetings from the Netherlands!
If you read a little on this blog, you can see that we’ve had about 10 VMs running 24/7/365 under Xen and ESXi for a few years. Both have been solid. I prefer Xen, since the host OS is useful for backups in our method.
I’m not convinced that I KNOW what is causing the lockups. It could be VirtualBox, the Nvidia graphics driver, or the disk subsystem. The only constant is that VirtualBox is running a WinXP client when it happens. Since I do video editing in the VM, I have doubts that running via VNC will be useful, but thanks for the idea. I’m concerned about everything from Oracle now too. I just read they are ending all development on OpenOffice. How much longer until VirtualBox meets the same fate? I’ve liked working with Sun, but Oracle has always left a bad taste in my mouth with their methods. I also see that IBM is hiring lots of systems engineers for AIX and P-series deployments, so I can only guess that clients are leaving HP and Sun for IBM UNIX servers. I know that I would (LPARs rock, BTW!).
Our clients have many, many VMs running under ESX and have been happy. An easy P2V reduction is 6:1, but for most clients you can plan on a 10:1 reduction, as most servers only run at about 20% utilization anyway. For some of our clients, the real savings have been with UPS needs. Buying just a few new, cheap Dell systems replaced 20 older, out-of-warranty systems and reduced the power requirements tremendously. We also deploy iSCSI SANs so they have much greater flexibility with their storage in the future.
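As a back-of-the-envelope check on those ratios, the math works out roughly like this (every figure below is an illustrative assumption, not measured client data):

```python
# Rough consolidation arithmetic -- all inputs are illustrative assumptions.
import math

old_servers = 20    # aging physical boxes to retire
avg_util    = 0.20  # typical measured utilization on the old boxes
target_util = 0.60  # how hot we're willing to run a new VM host
host_factor = 4.0   # assume one new host ~= 4x an old box in raw capacity

real_work    = old_servers * avg_util                     # "old boxes" worth of load
hosts_needed = int(math.ceil(real_work / (host_factor * target_util)))

print("%d old servers -> %d hosts (about %d:1)"
      % (old_servers, hosts_needed, old_servers // hosts_needed))
# With these example numbers: 20 old servers -> 2 hosts (about 10:1)
```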
If you have the right hardware, try ESXi. It is the virtualization standard, and almost all tools will convert from VMDK to whatever format you need if it isn’t from VMware. The performance is good too.
Please let us know how KVM works for you. I’ll cross my fingers and hope for the best!
I’m confused. Why wouldn’t you stick with Xen and just use another distro for Dom0?
Hi Pete,
Q: Why not use a different Xen distro?
A: I really didn’t consider doing what you suggest. Since there are a few more years of 8.04 LTS support, I’m not in that much of a hurry to solve this just for the Xen issue. There are other considerations for us too.
a) Xen isn’t perfect. When two of the major Linux distros decide it is too difficult to maintain Xen, we need to pay attention. Some things about Xen have caused issues here. Perhaps the Xen kernel patches from other distros would cause fewer issues than the 4-5 I’ve seen from Ubuntu that prevented boots.
b) Application upgrades are needed for most of our in-house apps at this time. That means this is an ideal time to change the VM solution and upgrade the underlying OS. We are happy with Ubuntu Server, but if we were going to change, it would be to CentOS. Neither supports Xen Dom0. Upgrading from 8.04 to 10.04 LTS at this point makes sense to me.
c) Sometimes clients dictate which distros can be used and sometimes we can dictate it. RHEL, CentOS, Debian, and Ubuntu ARE trusted and known. CentOS is free to use and can easily be converted to RHEL for clients that want 24/7 support from an established company – someone to blame. That means Xen isn’t a viable option for those clients.
d) I’d never really considered using a lesser known distro without a client specifically requesting it.
e) Ignorance. Getting hands on experience with other virtualization technology is important. KVM is looking fairly impressive in our recent testing and allows flexibility that some of the proprietary solutions do not without significant budget. Even with that knowledge, there is nothing like running a technology in production to gain the understanding that a client would want.
XenSource List of Distros with Xen mainly shows when Xen support was added, but it is less clear about current releases that support it.
Your suggestion does make sense and is an excellent option to consider.
This could be good news! It seems the Xen guys have found a way to get Xen Dom0 support into the kernel.
While Ubuntu hasn’t said anything about it … if it doesn’t require them to merge code, I can’t see why they wouldn’t support it.
If you are still using Ubuntu and Xen, you do have an upgrade option: migrate to Debian Server. The latest stable release from Debian (last month?) includes both KVM and Xen support (no, not at the same time).
If you are like me and still running Ubuntu Server 8.04 LTS, there are still a few years left with Canonical patches and support before we must migrate.
Life has gotten in the way of my migration, but I have deployed 3 production VMs onto KVM and they are working well with good performance (similar to ESXi). They were previously running on ESXi 4.x.
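For anyone following along at home, a quick way to keep an eye on KVM guests is the libvirt Python bindings. A minimal sketch (assuming python-libvirt is installed and the guests run under the local qemu:///system hypervisor; the output format is just illustrative):

```python
# List running KVM guests with their memory and vCPU counts via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")    # local, system-level libvirtd
if conn is None:
    raise SystemExit("Failed to connect to qemu:///system")

for dom_id in conn.listDomainsID():      # IDs of the running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
    print("%-20s  mem=%d MB  vcpus=%d" % (dom.name(), mem_kb // 1024, vcpus))

conn.close()
```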