Does Windows7 Run .... X
I follow a few email lists. Whenever a list is not related to Linux, there are always MS-Windows questions. With the release of Windows7, more and more of those questions are about specific software working under Windows7, especially software that had issues under Vista. Most of this article was taken from an email concerning Investors Toolkit, TK6, and whether it will run on Win7 on a netbook.
The Questions
Can Windows7 run … whatever-program?
Pondering File Transfer Speeds
I move files around my network a lot. Most of the time, these transfers are between wired GigE-connected systems and are limited by disk performance, not the network. It is good and fast. Multi-gigabyte files transfer in seconds.
However, some tools only work on Windows, and my only Windows machine is a WiFi (G) connected laptop. Yes, I can wire it into the GigE network and see huge transfer speeds limited only by the laptop disk drive, but that is usually not how I do it. I use WiFi for convenience.
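To put rough numbers on that gap, here is a back-of-the-envelope comparison. The throughput figures are common rule-of-thumb estimates (my assumptions), not measurements from my network:

```python
# Back-of-the-envelope transfer times for a multi-gigabyte file.
# Speeds are rule-of-thumb estimates, not measurements.
links_MBps = {
    "GigE (wired, real-world)": 100.0,  # ~80% of the 125 MB/s line rate
    "802.11g (54 Mbps, ideal)":  6.75,  # 54 Mbit/s divided by 8
    "802.11g (typical)":         2.5,   # overhead, distance, interference
}

file_GB = 4
for link, mbps in links_MBps.items():
    minutes = file_GB * 1024 / mbps / 60
    print(f"{link:26s} {minutes:6.1f} minutes")
```

A 4GB file that takes under a minute on the wire takes roughly half an hour over typical 802.11g, so the radio, not the disk, sets the ceiling here.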
So why is one transfer tool 2x faster than the others over the same link? Why?
Solved: Clock Time Loss Under Windows7 and Vista
How to solve this
There are many ways to solve this issue. This is just the one I used, based on my experience and expertise. I didn’t start with this complex solution; I tried it only after all the other attempted solutions failed, badly. My Windows Vista and Win7 computers were losing 2 minutes a day. After the first attempt to correct it with a daily time sync, each machine was still losing about a minute a day, which was impacting some scheduled events. Being 1 minute off matters when someone else sets the start and end schedule.
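One building block of any fix here is syncing far more often than the Windows default. This is a minimal sketch, not my full solution: it assumes the built-in w32tm time client (run from an elevated context), and the hourly interval is my arbitrary choice:

```python
# Force an NTP resync every hour instead of Windows' default weekly sync.
# w32tm is the built-in Windows time client; a Scheduled Task running
# "w32tm /resync" hourly accomplishes the same thing without this loop.
import subprocess
import time

while True:
    # Ask the Windows Time service to sync with its time source now.
    subprocess.run(["w32tm", "/resync"], check=False)
    time.sleep(3600)  # one hour between syncs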
Stolen Laptop, What Now?
I saw a headline about stolen laptops here and thought I’d mention my methods before reading the other article.
Before Stolen Laptop
The most important stuff happens before your laptop is stolen, and you need to do it yourself. It isn’t automatic.
Why Some Hardware in Your Computer Doesn't Work With Linux
I read a comment on a popular blog site today where people were complaining that Ubuntu didn’t work with their computer. They’d tried a few different versions and it still didn’t work. Of course, they blamed Ubuntu, not the hardware provider.
Some complained about sound or video or wireless cards not working. I’ve had issues with RAID cards not working beyond a basic level; JBOD only, no RAID support. In the old days, the complaints were with modems (win-modems) not working.
In their minds, Ubuntu wants them to switch from the other operating system, so it needs to do whatever it takes to support their hardware. Clearly, they are confused. Ubuntu has very little to do with which hardware is supported. Very little.
Virtualization Survey, an Overview
Sadly, the question of which virtualization is best for Linux isn’t an easy one to answer. Many different factors go into the answer. While I cannot answer it for you, since your needs and mine are different, I can provide a little background on what I chose and why. We won’t discuss why you should be running virtualization or which specific OSes to run. You already know why.
Key things that go into my answer
- I’m not new to UNIX. I’ve been using UNIX since 1992.
- I don’t need a GUI. Actually, I don’t want a GUI and the overhead that it demands.
- I would prefer to pay for support when I need it, but not be forced to pay to do things we all need to accomplish – backups, for example.
- My client OSes won’t be Windows. They will probably be the same OS as the hypervisor hosting them. There are some efficiencies in doing this like reduced virtualization overhead.
- I try to avoid Microsoft solutions. They often come with additional requirements that, in turn, come with more requirements. Soon, you’re running MS-ActiveDirectory, MS-Sharepoint, MS-SQL, and lots of MS-Windows Servers. With that come the MS-CALs. No thanks.
- We’re running servers, not desktops. Virtualization for desktops implies some other needs (sound, graphics acceleration, USB).
- Finally, we’ll be using Intel Core 2 Duo or better CPUs. They will have VT-x support enabled and 8GB+ of RAM. AMD makes fine CPUs too, but during our recent upgrade cycle, Intel had the better price/performance ratio.
Major Virtualization Choices
- VMware ESXi 4 (don’t bother with 3.x at this point)
- Sun VirtualBox
- KVM as provided by RedHat or Ubuntu
- Xen as provided by Ubuntu
I currently run all of these except KVM, so I think I can say which I prefer and which is proven.
ESXi 4.x
I run this on a test server just to gain knowledge. I’ve considered becoming VMware Certified and may still get certified, which is really odd for me: I don’t believe most mainstream certifications mean much, with CISSP, VMware, Oracle DBA, and Cisco being the exceptions. I dislike that VMware has disabled things that used to work in prior versions to push full ESX deployments over the free ESXi – backups at the hypervisor level, for example. I’ve been using some version of VMware for about 5 years.
A negative: VMware can be picky about which hardware it will support. Always check the approved hardware list. Almost every desktop motherboard ships with an unsupported network card, and VMware may not like the disk controller either, so spending another $30-$200 on networking will be necessary.
ESXi is rock solid. No crashes, ever. There are many very large customers running thousands of VMware ESX server hosts.
Sun VirtualBox
I run this on my laptop because it is the easiest hypervisor to use. Also, since it targets desktops, it includes USB pass-thru capabilities. That’s a good thing; unfortunately, it is also the least stable hypervisor that I use. That system locks up about once a month for no apparent reason, which is unacceptable for a server under any conditions. The host OS is Windows7 x64, so that could be the source of the instability, but I don’t do anything else on that machine; it is used almost exclusively as a platform for running VirtualBox and very little else.
Until VirtualBox gains stability, it isn’t suitable for use on servers, IMHO.
Xen (Ubuntu patches)
I run this on 2 servers, each running about 6 client Linux systems. During system updates, another 6 systems can be spawned as part of the backout plan or for testing new versions of things. I built the systems over the last few years using carefully selected name-brand parts. I don’t use HVM mode; because each VM runs the same paravirtualized kernel as the host, it gets about 97% of native hardware performance.
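For context, each paravirtualized guest is defined by a small config file, and Xen’s config format happens to be Python syntax. The example below is hypothetical; the kernel paths, LVM volume, and bridge name are illustrative, not my actual configs:

```python
# Hypothetical /etc/xen/vm1.cfg -- Xen domU configs are Python syntax.
name    = "vm1"
memory  = 1024                              # MB
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6.24-19-xen"     # same Xen kernel the host runs
ramdisk = "/boot/initrd.img-2.6.24-19-xen"
root    = "/dev/xvda1 ro"
disk    = ["phy:/dev/vg0/vm1-disk,xvda1,w"] # LVM volume passed to the guest
vif     = ["bridge=xenbr0"]                 # bridged networking
```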
There are downsides to Xen.
- Whenever the Xen kernel gets updated, it is a big deal: the hypervisor must be rebooted. In fact, I’ve had to reboot the hypervisor 3 times after a single kernel update before all the clients came up properly. Now I plan for that.
- Kernel modules have to be manually copied into each VM. It isn’t a big deal, but it does have to be done (see the sketch after this list).
- I don’t use a GUI; that’s my preference. If you aren’t experienced with UNIX, you’ll want to find a GUI to help create, configure, and manage Xen infrastructure. I have a few scripts – vm_create, kernel_update, and lots of chained backup scripts – to get the work done.
- You’ll need to roll your own backup method. There are many, many options. If you’re having trouble determining which hypervisor to use, you don’t stand a chance of determining the best backup method. I’ve discussed backup options extensively on this blog.
- No USB pass-thru, that I’m aware of. Do you know differently?
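Here is the module-copy sketch promised above. It assumes each guest’s root filesystem is an LVM volume the host can mount; the volume names, mount point, and kernel version are all hypothetical:

```python
# After a kernel update, copy the new modules into each PV guest's root
# filesystem. Run as root on the host, with the guests shut down.
import subprocess

KERNEL = "2.6.24-19-xen"          # the newly installed Xen kernel version
GUESTS = ["vm1", "vm2", "vm3"]    # illustrative guest volume names
MNT = "/mnt/guest"

for guest in GUESTS:
    subprocess.run(["mount", f"/dev/vg0/{guest}-disk", MNT], check=True)
    subprocess.run(["cp", "-a", f"/lib/modules/{KERNEL}",
                    f"{MNT}/lib/modules/"], check=True)
    subprocess.run(["umount", MNT], check=True)
```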
I’ve only had 1 crash after a kernel update with Xen and that was over 8 months ago. I can’t rule out cockpit error.
Xen is what Amazon EC2 uses. They have millions of VMs. Now, that’s what I call scalability. This knowledge weighed heavily on my decision.
KVM
I don’t know much about KVM. I do know that both RedHat and Ubuntu are migrating to KVM as the default virtualization hypervisor in their servers, since the KVM code was added to the Linux kernel. Canonical’s 10.04 LTS release will also include an API 100% compatible with Amazon’s EC2 API, binary-compatible VM images, and VM cluster management. If I were deploying new servers today, I’d at least try the 9.10 Server beta and these capabilities. Since we run production servers on Xen, I don’t see us migrating until KVM and the specific version of Ubuntu required are supported by those apps.
Did I miss any important concerns?
It is unlikely that your key things match mine. Let me know in the comments.
December OpenSolaris Meetup
I attended the Atlanta-area OpenSolaris Meetup last night, even though major rain in the area made the 30-minute drive challenging. Why would I bother? Swag? Scott D presenting? Being around other nerds who like Solaris? No, although those are all valid reasons too.
Even with the nasty weather, the room was packed and we had to bring in some more chairs so everyone could sit. About 20 people attended.
New stuff in ZFS
Yep, the entire meeting was about fairly new features added to ZFS on OpenSolaris. Things like data deduplication and how well it works in normal and extreme situations. The main things I took away from the talk were:
- ZFS is stable
- Data Deduplication, dedup for short, should only be used on backup areas, not on live production data, until you become comfortable with it and the performance in your environment
- Dedup happens at the block level of a zpool; anything above that level still works as designed (see the toy sketch after this list)
- Only use OpenSolaris builds after 129 if you plan to use dedup. Earlier builds had data-loss issues in the dedup code.
- Solaris doesn’t have the dedup code yet. It is not currently scheduled for any specific release either.
- Dedup is only available in real-time now; there is no dedup thread that can be scheduled to run later. This could have unknown performance impacts (good or bad).
- ZFS supports both read and write cache devices. This means we can deploy SSDs for either cache and use cheaper, larger SATA disks for the actual storage. Some cost/performance examples were shown comparing 10,000rpm SAS drives against SSD cache in front of 4200rpm SATA drives. The price was about the same, 4x more storage was available, and read performance was 2x better while write performance was about the same. Nice.
- ZFS has added a way to check for disk size changes – suppose your storage is external to the server and really just a logical allocation. On the storage server, you can expand the LUN that the server sees, and ZFS can be configured to refresh disk device sizes either manually or automatically.
- Device removal – currently there is no direct method to remove a disk from a ZFS pool, though there are workarounds. A supported method to remove a disk from a zpool is planned for OpenSolaris ZFS this year.
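Here is the toy sketch promised above. It mimics block-level dedup – store each unique block once, keyed by its checksum – without pretending to be ZFS internals (ZFS does use SHA-256 block checksums; the 128KB block size is just an illustrative default):

```python
# Toy block-level dedup: identical blocks are stored once, no matter
# how many files or users reference them.
import hashlib
import os

store = {}  # checksum -> block contents (unique blocks only)

def write(data: bytes, bs: int = 128 * 1024) -> list:
    """Write data, returning block references; duplicate blocks cost nothing."""
    refs = []
    for i in range(0, len(data), bs):
        block = data[i:i + bs]
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # only never-seen blocks consume space
        refs.append(key)
    return refs

data = os.urandom(512 * 1024)  # 4 distinct 128KB blocks
write(data)                    # department A stores the file
write(data)                    # department B stores the same file
print(f"{len(store)} unique blocks backing 8 logical blocks")
```

This is also why the charge-back demonstration below makes sense: every department holds full references to the data, while the pool stores each block only once.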
To really get the demo, you need to accept the other great things about ZFS as a given, then add the new capabilities on top. One of the demonstrations showed how IT shops can charge back data storage to each of multiple users, since each is using the data, even when 20 other departments share the same deduplicated blocks. Basically, dedup gives you more disk storage without buying more disk.
ACLs are managed at the file system level, not the disk block level, so the dedup’ed data still can only be accessed appropriately.
Why OpenSolaris?
OpenSolaris is an open source version of Sun Microsystems’ Solaris operating system that runs on lots of hardware you may already own. It also runs inside most virtual machines as a client or guest. Since it looks and feels like Solaris, you can become familiar with it at home for just the cost of disk storage – about 20GB. Sun also uses OpenSolaris to trial new features before placing them into the real Solaris releases. I run OpenSolaris in a virtual machine under Windows7 using the free version of Sun’s VirtualBox hypervisor. I know others who run it directly on hardware, under Xen, and under VMware hypervisors too. Just give it enough virtual disk storage and go. I think 10GB is enough to load it, but a little more, say 20GB, will let you play with it and its applications more.
If you are in the market for NetApp storage, you really need to take a look at Sun’s storage servers running ZFS. The entry price is significantly less, and you get all the flexibility of Solaris without giving up CIFS, iSCSI, NFS, and, in the future, fibre channel storage. Good sales job, Sun.
Swag
No meetup is a success without some swag. Water bottles, t-shirts, hats, and books were all available, and we were encouraged to take some after the iPod Nano raffle was won (not by me). Pizza and sodas were also provided by the sponsors.
ESXi 4 and Win7 Pro
Last week, I set up and configured a special desktop for the company’s accounting system. Basically, it is a Windows7 Pro desktop running under ESXi 4 that the folks responsible for accounting remote (RDP) into after connecting via VPN to its special network. We’re small, and only a few people even need access – never more than 1 at a time.
It was fairly painless to set up: install the accounting software, load the payroll CD, validate remote VPN access (which is never trivial), then set up daily backup jobs. Of course, AV, automatic patches, and nasty IE settings were configured too. Each daily backup set is about 250MB, which isn’t too bad, but more than I would have thought given that the machine is idle most of the time and won’t be used more than 3 days a month. These backups are Microsoft VHD files made with the built-in backup, which could be useful, but I’d rather have a complete VMDK, VDI, or Xen img file to restore.
Of course, it isn’t possible to connect to this VM without going through our VPN.
Next I need to perform a test restore to another machine under some virtualization tool that we use. Yeah, I know that with the VHD I can perform a restore someplace else, but with a VM-image file, I just point a hypervisor at it and go. Now that VirtualBox supports VMware VMDK files, this test really should be trivial. If it goes well, I’ll take my WinXP VM (MS-Office, Visio, and other WinXP-only tools) and put it under a server-based VM too. It will be better to not travel with that stuff on my laptop anyway.
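A sketch of what that trivial test could look like, driving VirtualBox from the command line; the VM name, VMDK path, and memory size are hypothetical:

```python
# Register a new VirtualBox VM around an existing VMDK and boot it.
import subprocess

VM = "restore-test"                    # hypothetical VM name
VMDK = "/vms/accounting-restore.vmdk"  # hypothetical disk image path

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM, "--register")
vbox("modifyvm", VM, "--memory", "1024", "--ostype", "Windows7")
vbox("storagectl", VM, "--name", "SATA", "--add", "sata")
vbox("storageattach", VM, "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", VMDK)
vbox("startvm", VM)
```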
Memory Use and Win7-x86
Fantastic is the only word I can use. Windows7 x86 memory use is FANTASTIC (meaning low). I’ve done a little optimization using Vista System Optimizer after installing Win7 on my laptop – here are the results:
Win7 = Host OS
Ubuntu = Client VM – 1224MB allocated
The total system memory used with VirtualBox, Ubuntu and Windows Media Player playing a TV show is 1.75GB. 1.2GB of that is allocated to the client VM. Under Vista-64, this same config would use 2.5GB.
Running another VM, WinXP with 1GB allocated, brings the total memory used to 2.75GB.
Win7 = Host OS
Ubuntu = Client VM – 1224MB allocated
WinXP = Client VM – 1024MB allocated.
This would use almost 4GB in Vista-64.
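The interesting number is the host-OS overhead: total memory in use minus what is allocated to client VMs. Using the figures above (and reading "almost 4GB" as 4.0):

```python
# Host-OS memory overhead implied by the figures above, in GB.
configs = {
    #                   (total in use, allocated to client VMs)
    "Win7 x86, 1 VM":   (1.75, 1.2),
    "Vista-64, 1 VM":   (2.5,  1.2),
    "Win7 x86, 2 VMs":  (2.75, 2.2),
    "Vista-64, 2 VMs":  (4.0,  2.2),   # "almost 4GB"
}
for name, (total, vms) in configs.items():
    print(f"{name:16s} host overhead: {total - vms:.2f} GB")
```

Win7 leaves roughly 0.55GB for the host and everything else; Vista-64 needed 1.3-1.8GB for the same job.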
Even with the 32-bit limitation of 3.5GB of usable RAM on my system, I actually gain more usable RAM with 32-bit Win7 than with 64-bit Vista – and isn’t giving more RAM to client VMs the goal?
Windows7 Final Install Revisited
I’m asking for help again with my Windows7 final installation. See, Microsoft gave me the 32-bit version, not the 64-bit version. This puts a wrinkle in my original plan to host Win7 on the laptop, because about 0.6GB of RAM cannot be used. On a system with only 4GB, 0.6GB is a bunch, perhaps too much to waste.
The current goal is:
JeOS/Linux-Host
|____ Win7-VM (MCE)
|____ xubuntu-VM
|____ WinXP-VM (Visio / MS-Office / Quicken)
RAM allocation plans
JeOS – 512MB
Win7 – 1GB
WinXP – 1GB
xubuntu – 1.5GB
If Media Center in Win7 doesn’t work well enough in a VM – safe to leave on 24/7, with USB support – this plan will be trashed. The QAM recording is nice. For me, it is about the recording, not the playback or other features.
There are other complications in using Win7 Media Center. The recorded file format, for example. That’s something for another story.